00:00:00.001 Started by upstream project "autotest-spdk-v24.01-LTS-vs-dpdk-v22.11" build number 622 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3287 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.125 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.126 The recommended git tool is: git 00:00:00.126 using credential 00000000-0000-0000-0000-000000000002 00:00:00.128 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.174 Fetching changes from the remote Git repository 00:00:00.178 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.224 Using shallow fetch with depth 1 00:00:00.224 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.224 > git --version # timeout=10 00:00:00.251 > git --version # 'git version 2.39.2' 00:00:00.251 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.269 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.269 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.680 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.692 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.704 Checking out Revision 16485855f227725e8e9566ee24d00b82aaeff0db (FETCH_HEAD) 00:00:05.704 > git config core.sparsecheckout # timeout=10 00:00:05.716 > git read-tree -mu HEAD # timeout=10 00:00:05.731 > git checkout -f 16485855f227725e8e9566ee24d00b82aaeff0db # timeout=5 00:00:05.753 Commit message: "ansible/inventory: fix WFP37 mac address" 00:00:05.753 > git rev-list --no-walk 16485855f227725e8e9566ee24d00b82aaeff0db # timeout=10 00:00:05.896 [Pipeline] Start of Pipeline 00:00:05.910 [Pipeline] library 00:00:05.912 Loading library shm_lib@master 00:00:05.912 Library shm_lib@master is cached. Copying from home. 00:00:05.928 [Pipeline] node 00:25:27.060 Resuming build at Mon Jul 22 12:47:46 UTC 2024 after Jenkins restart 00:25:27.066 Ready to run at Mon Jul 22 12:47:46 UTC 2024 00:25:30.938 Running on VM-host-SM16 in /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:25:30.960 [Pipeline] { 00:25:31.193 [Pipeline] catchError 00:25:31.203 [Pipeline] { 00:25:31.250 [Pipeline] wrap 00:25:31.272 [Pipeline] { 00:25:31.284 [Pipeline] stage 00:25:31.287 [Pipeline] { (Prologue) 00:25:31.432 [Pipeline] echo 00:25:31.435 Node: VM-host-SM16 00:25:31.449 [Pipeline] cleanWs 00:25:31.501 [WS-CLEANUP] Deleting project workspace... 00:25:31.501 [WS-CLEANUP] Deferred wipeout is used... 
00:25:31.509 [WS-CLEANUP] done 00:25:31.896 [Pipeline] setCustomBuildProperty 00:25:31.993 [Pipeline] httpRequest 00:25:33.785 [Pipeline] echo 00:25:33.787 Sorcerer 10.211.164.101 is alive 00:25:33.799 [Pipeline] httpRequest 00:25:33.806 HttpMethod: GET 00:25:33.806 URL: http://10.211.164.101/packages/jbp_16485855f227725e8e9566ee24d00b82aaeff0db.tar.gz 00:25:33.807 Sending request to url: http://10.211.164.101/packages/jbp_16485855f227725e8e9566ee24d00b82aaeff0db.tar.gz 00:25:33.822 Response Code: HTTP/1.1 200 OK 00:25:33.823 Success: Status code 200 is in the accepted range: 200,404 00:25:33.823 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp_16485855f227725e8e9566ee24d00b82aaeff0db.tar.gz 00:25:35.331 [Pipeline] sh 00:25:35.619 + tar --no-same-owner -xf jbp_16485855f227725e8e9566ee24d00b82aaeff0db.tar.gz 00:25:35.639 [Pipeline] httpRequest 00:25:35.670 [Pipeline] echo 00:25:35.671 Sorcerer 10.211.164.101 is alive 00:25:35.684 [Pipeline] httpRequest 00:25:35.689 HttpMethod: GET 00:25:35.689 URL: http://10.211.164.101/packages/spdk_4b94202c659be49093c32ec1d2d75efdacf00691.tar.gz 00:25:35.691 Sending request to url: http://10.211.164.101/packages/spdk_4b94202c659be49093c32ec1d2d75efdacf00691.tar.gz 00:25:35.710 Response Code: HTTP/1.1 200 OK 00:25:35.710 Success: Status code 200 is in the accepted range: 200,404 00:25:35.711 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk_4b94202c659be49093c32ec1d2d75efdacf00691.tar.gz 00:26:23.781 [Pipeline] sh 00:26:24.068 + tar --no-same-owner -xf spdk_4b94202c659be49093c32ec1d2d75efdacf00691.tar.gz 00:26:27.375 [Pipeline] sh 00:26:27.659 + git -C spdk log --oneline -n5 00:26:27.659 4b94202c6 lib/event: Bug fix for framework_set_scheduler 00:26:27.659 507e9ba07 nvme: add lock_depth for ctrlr_lock 00:26:27.659 62fda7b5f nvme: check pthread_mutex_destroy() return value 00:26:27.659 e03c164a1 nvme: add nvme_ctrlr_lock 00:26:27.659 d61f89a86 nvme/cuse: Add ctrlr_lock for cuse register and unregister 00:26:27.680 [Pipeline] withCredentials 00:26:27.708 > git --version # timeout=10 00:26:27.718 > git --version # 'git version 2.39.2' 00:26:27.739 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:26:27.742 [Pipeline] { 00:26:27.749 [Pipeline] retry 00:26:27.751 [Pipeline] { 00:26:27.766 [Pipeline] sh 00:26:28.233 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4 00:26:28.263 [Pipeline] } 00:26:28.286 [Pipeline] // retry 00:26:28.292 [Pipeline] } 00:26:28.316 [Pipeline] // withCredentials 00:26:28.328 [Pipeline] httpRequest 00:26:28.352 [Pipeline] echo 00:26:28.354 Sorcerer 10.211.164.101 is alive 00:26:28.366 [Pipeline] httpRequest 00:26:28.371 HttpMethod: GET 00:26:28.372 URL: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:26:28.374 Sending request to url: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:26:28.385 Response Code: HTTP/1.1 200 OK 00:26:28.386 Success: Status code 200 is in the accepted range: 200,404 00:26:28.386 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:26:40.010 [Pipeline] sh 00:26:40.296 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:26:42.213 [Pipeline] sh 00:26:42.497 + git -C dpdk log --oneline -n5 00:26:42.497 caf0f5d395 version: 22.11.4 00:26:42.497 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:26:42.497 dc9c799c7d vhost: fix missing spinlock unlock 
00:26:42.497 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:26:42.497 6ef77f2a5e net/gve: fix RX buffer size alignment 00:26:42.515 [Pipeline] writeFile 00:26:42.532 [Pipeline] sh 00:26:42.816 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:26:42.828 [Pipeline] sh 00:26:43.113 + cat autorun-spdk.conf 00:26:43.113 SPDK_RUN_FUNCTIONAL_TEST=1 00:26:43.113 SPDK_TEST_NVMF=1 00:26:43.113 SPDK_TEST_NVMF_TRANSPORT=tcp 00:26:43.113 SPDK_TEST_USDT=1 00:26:43.113 SPDK_RUN_UBSAN=1 00:26:43.113 SPDK_TEST_NVMF_MDNS=1 00:26:43.113 NET_TYPE=virt 00:26:43.113 SPDK_JSONRPC_GO_CLIENT=1 00:26:43.113 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:26:43.113 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:26:43.113 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:26:43.121 RUN_NIGHTLY=1 00:26:43.123 [Pipeline] } 00:26:43.139 [Pipeline] // stage 00:26:43.156 [Pipeline] stage 00:26:43.159 [Pipeline] { (Run VM) 00:26:43.175 [Pipeline] sh 00:26:43.460 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:26:43.460 + echo 'Start stage prepare_nvme.sh' 00:26:43.460 Start stage prepare_nvme.sh 00:26:43.460 + [[ -n 3 ]] 00:26:43.460 + disk_prefix=ex3 00:26:43.460 + [[ -n /var/jenkins/workspace/nvmf-tcp-vg-autotest ]] 00:26:43.460 + [[ -e /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf ]] 00:26:43.460 + source /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf 00:26:43.460 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:26:43.460 ++ SPDK_TEST_NVMF=1 00:26:43.460 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:26:43.460 ++ SPDK_TEST_USDT=1 00:26:43.460 ++ SPDK_RUN_UBSAN=1 00:26:43.460 ++ SPDK_TEST_NVMF_MDNS=1 00:26:43.460 ++ NET_TYPE=virt 00:26:43.460 ++ SPDK_JSONRPC_GO_CLIENT=1 00:26:43.460 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:26:43.460 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:26:43.460 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:26:43.460 ++ RUN_NIGHTLY=1 00:26:43.460 + cd /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:26:43.460 + nvme_files=() 00:26:43.460 + declare -A nvme_files 00:26:43.460 + backend_dir=/var/lib/libvirt/images/backends 00:26:43.460 + nvme_files['nvme.img']=5G 00:26:43.460 + nvme_files['nvme-cmb.img']=5G 00:26:43.460 + nvme_files['nvme-multi0.img']=4G 00:26:43.460 + nvme_files['nvme-multi1.img']=4G 00:26:43.460 + nvme_files['nvme-multi2.img']=4G 00:26:43.460 + nvme_files['nvme-openstack.img']=8G 00:26:43.460 + nvme_files['nvme-zns.img']=5G 00:26:43.460 + (( SPDK_TEST_NVME_PMR == 1 )) 00:26:43.460 + (( SPDK_TEST_FTL == 1 )) 00:26:43.460 + (( SPDK_TEST_NVME_FDP == 1 )) 00:26:43.460 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:26:43.460 + for nvme in "${!nvme_files[@]}" 00:26:43.460 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi2.img -s 4G 00:26:43.460 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:26:43.460 + for nvme in "${!nvme_files[@]}" 00:26:43.460 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-cmb.img -s 5G 00:26:43.460 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:26:43.460 + for nvme in "${!nvme_files[@]}" 00:26:43.460 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-openstack.img -s 8G 00:26:43.460 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:26:43.460 + for nvme in "${!nvme_files[@]}" 00:26:43.460 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-zns.img -s 5G 00:26:43.460 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:26:43.460 + for nvme in "${!nvme_files[@]}" 00:26:43.460 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi1.img -s 4G 00:26:43.460 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:26:43.460 + for nvme in "${!nvme_files[@]}" 00:26:43.460 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi0.img -s 4G 00:26:43.460 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:26:43.460 + for nvme in "${!nvme_files[@]}" 00:26:43.460 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme.img -s 5G 00:26:43.737 Formatting '/var/lib/libvirt/images/backends/ex3-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:26:43.737 ++ sudo grep -rl ex3-nvme.img /etc/libvirt/qemu 00:26:43.737 + echo 'End stage prepare_nvme.sh' 00:26:43.737 End stage prepare_nvme.sh 00:26:43.760 [Pipeline] sh 00:26:44.040 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:26:44.040 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex3-nvme.img -b /var/lib/libvirt/images/backends/ex3-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex3-nvme-multi1.img:/var/lib/libvirt/images/backends/ex3-nvme-multi2.img -H -a -v -f fedora38 00:26:44.040 00:26:44.040 DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant 00:26:44.040 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk 00:26:44.040 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-vg-autotest 00:26:44.040 HELP=0 00:26:44.040 DRY_RUN=0 00:26:44.040 NVME_FILE=/var/lib/libvirt/images/backends/ex3-nvme.img,/var/lib/libvirt/images/backends/ex3-nvme-multi0.img, 00:26:44.040 NVME_DISKS_TYPE=nvme,nvme, 00:26:44.040 NVME_AUTO_CREATE=0 00:26:44.040 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex3-nvme-multi1.img:/var/lib/libvirt/images/backends/ex3-nvme-multi2.img, 00:26:44.040 NVME_CMB=,, 00:26:44.040 NVME_PMR=,, 00:26:44.040 NVME_ZNS=,, 00:26:44.040 NVME_MS=,, 00:26:44.040 NVME_FDP=,, 00:26:44.040 
SPDK_VAGRANT_DISTRO=fedora38 00:26:44.040 SPDK_VAGRANT_VMCPU=10 00:26:44.040 SPDK_VAGRANT_VMRAM=12288 00:26:44.040 SPDK_VAGRANT_PROVIDER=libvirt 00:26:44.040 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:26:44.040 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:26:44.040 SPDK_OPENSTACK_NETWORK=0 00:26:44.040 VAGRANT_PACKAGE_BOX=0 00:26:44.040 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:26:44.040 FORCE_DISTRO=true 00:26:44.040 VAGRANT_BOX_VERSION= 00:26:44.040 EXTRA_VAGRANTFILES= 00:26:44.040 NIC_MODEL=e1000 00:26:44.040 00:26:44.040 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt' 00:26:44.040 /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:26:48.234 Bringing machine 'default' up with 'libvirt' provider... 00:26:48.804 ==> default: Creating image (snapshot of base box volume). 00:26:48.804 ==> default: Creating domain with the following settings... 00:26:48.804 ==> default: -- Name: fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721652547_e7572518849e15f0a197 00:26:48.804 ==> default: -- Domain type: kvm 00:26:48.804 ==> default: -- Cpus: 10 00:26:48.804 ==> default: -- Feature: acpi 00:26:48.805 ==> default: -- Feature: apic 00:26:48.805 ==> default: -- Feature: pae 00:26:48.805 ==> default: -- Memory: 12288M 00:26:48.805 ==> default: -- Memory Backing: hugepages: 00:26:48.805 ==> default: -- Management MAC: 00:26:48.805 ==> default: -- Loader: 00:26:48.805 ==> default: -- Nvram: 00:26:48.805 ==> default: -- Base box: spdk/fedora38 00:26:48.805 ==> default: -- Storage pool: default 00:26:48.805 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721652547_e7572518849e15f0a197.img (20G) 00:26:48.805 ==> default: -- Volume Cache: default 00:26:48.805 ==> default: -- Kernel: 00:26:48.805 ==> default: -- Initrd: 00:26:48.805 ==> default: -- Graphics Type: vnc 00:26:48.805 ==> default: -- Graphics Port: -1 00:26:48.805 ==> default: -- Graphics IP: 127.0.0.1 00:26:48.805 ==> default: -- Graphics Password: Not defined 00:26:48.805 ==> default: -- Video Type: cirrus 00:26:48.805 ==> default: -- Video VRAM: 9216 00:26:48.805 ==> default: -- Sound Type: 00:26:48.805 ==> default: -- Keymap: en-us 00:26:48.805 ==> default: -- TPM Path: 00:26:48.805 ==> default: -- INPUT: type=mouse, bus=ps2 00:26:48.805 ==> default: -- Command line args: 00:26:48.805 ==> default: -> value=-device, 00:26:48.805 ==> default: -> value=nvme,id=nvme-0,serial=12340, 00:26:48.805 ==> default: -> value=-drive, 00:26:48.805 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme.img,if=none,id=nvme-0-drive0, 00:26:48.805 ==> default: -> value=-device, 00:26:48.805 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:26:48.805 ==> default: -> value=-device, 00:26:48.805 ==> default: -> value=nvme,id=nvme-1,serial=12341, 00:26:48.805 ==> default: -> value=-drive, 00:26:48.805 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:26:48.805 ==> default: -> value=-device, 00:26:48.805 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:26:48.805 ==> default: -> value=-drive, 00:26:48.805 ==> default: -> 
value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:26:48.805 ==> default: -> value=-device, 00:26:48.805 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:26:48.805 ==> default: -> value=-drive, 00:26:48.805 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:26:48.805 ==> default: -> value=-device, 00:26:48.805 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:26:49.066 ==> default: Creating shared folders metadata... 00:26:49.066 ==> default: Starting domain. 00:26:50.447 ==> default: Waiting for domain to get an IP address... 00:27:12.461 ==> default: Waiting for SSH to become available... 00:27:12.461 ==> default: Configuring and enabling network interfaces... 00:27:14.998 default: SSH address: 192.168.121.24:22 00:27:14.998 default: SSH username: vagrant 00:27:14.998 default: SSH auth method: private key 00:27:16.905 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:27:23.473 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:27:30.040 ==> default: Mounting SSHFS shared folder... 00:27:31.422 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:27:31.422 ==> default: Checking Mount.. 00:27:32.360 ==> default: Folder Successfully Mounted! 00:27:32.360 ==> default: Running provisioner: file... 00:27:33.296 default: ~/.gitconfig => .gitconfig 00:27:33.555 00:27:33.555 SUCCESS! 00:27:33.555 00:27:33.555 cd to /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt and type "vagrant ssh" to use. 00:27:33.555 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:27:33.555 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt" to destroy all trace of vm. 00:27:33.555 00:27:33.564 [Pipeline] } 00:27:33.583 [Pipeline] // stage 00:27:33.595 [Pipeline] dir 00:27:33.595 Running in /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt 00:27:33.597 [Pipeline] { 00:27:33.613 [Pipeline] catchError 00:27:33.615 [Pipeline] { 00:27:33.629 [Pipeline] sh 00:27:33.911 + vagrant ssh-config --host vagrant 00:27:33.911 + sed -ne /^Host/,$p 00:27:33.911 + tee ssh_conf 00:27:37.200 Host vagrant 00:27:37.200 HostName 192.168.121.24 00:27:37.200 User vagrant 00:27:37.200 Port 22 00:27:37.200 UserKnownHostsFile /dev/null 00:27:37.200 StrictHostKeyChecking no 00:27:37.200 PasswordAuthentication no 00:27:37.200 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1716830599-074-updated-1705279005/libvirt/fedora38 00:27:37.200 IdentitiesOnly yes 00:27:37.200 LogLevel FATAL 00:27:37.200 ForwardAgent yes 00:27:37.200 ForwardX11 yes 00:27:37.200 00:27:37.212 [Pipeline] withEnv 00:27:37.214 [Pipeline] { 00:27:37.226 [Pipeline] sh 00:27:37.507 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:27:37.507 source /etc/os-release 00:27:37.507 [[ -e /image.version ]] && img=$(< /image.version) 00:27:37.507 # Minimal, systemd-like check. 
00:27:37.507 if [[ -e /.dockerenv ]]; then 00:27:37.507 # Clear garbage from the node's name: 00:27:37.507 # agt-er_autotest_547-896 -> autotest_547-896 00:27:37.507 # $HOSTNAME is the actual container id 00:27:37.507 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:27:37.507 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:27:37.507 # We can assume this is a mount from a host where container is running, 00:27:37.507 # so fetch its hostname to easily identify the target swarm worker. 00:27:37.507 container="$(< /etc/hostname) ($agent)" 00:27:37.507 else 00:27:37.507 # Fallback 00:27:37.507 container=$agent 00:27:37.507 fi 00:27:37.507 fi 00:27:37.507 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:27:37.507 00:27:37.518 [Pipeline] } 00:27:37.538 [Pipeline] // withEnv 00:27:37.547 [Pipeline] setCustomBuildProperty 00:27:37.562 [Pipeline] stage 00:27:37.564 [Pipeline] { (Tests) 00:27:37.581 [Pipeline] sh 00:27:37.861 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:27:37.875 [Pipeline] sh 00:27:38.159 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:27:38.175 [Pipeline] timeout 00:27:38.175 Timeout set to expire in 40 min 00:27:38.178 [Pipeline] { 00:27:38.195 [Pipeline] sh 00:27:38.477 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:27:39.044 HEAD is now at 4b94202c6 lib/event: Bug fix for framework_set_scheduler 00:27:39.060 [Pipeline] sh 00:27:39.341 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:27:39.614 [Pipeline] sh 00:27:39.898 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:27:39.915 [Pipeline] sh 00:27:40.197 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-vg-autotest ./autoruner.sh spdk_repo 00:27:40.197 ++ readlink -f spdk_repo 00:27:40.197 + DIR_ROOT=/home/vagrant/spdk_repo 00:27:40.197 + [[ -n /home/vagrant/spdk_repo ]] 00:27:40.197 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:27:40.197 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:27:40.197 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:27:40.197 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:27:40.197 + [[ -d /home/vagrant/spdk_repo/output ]] 00:27:40.197 + [[ nvmf-tcp-vg-autotest == pkgdep-* ]] 00:27:40.197 + cd /home/vagrant/spdk_repo 00:27:40.197 + source /etc/os-release 00:27:40.197 ++ NAME='Fedora Linux' 00:27:40.197 ++ VERSION='38 (Cloud Edition)' 00:27:40.197 ++ ID=fedora 00:27:40.197 ++ VERSION_ID=38 00:27:40.197 ++ VERSION_CODENAME= 00:27:40.197 ++ PLATFORM_ID=platform:f38 00:27:40.197 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:27:40.197 ++ ANSI_COLOR='0;38;2;60;110;180' 00:27:40.197 ++ LOGO=fedora-logo-icon 00:27:40.197 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:27:40.197 ++ HOME_URL=https://fedoraproject.org/ 00:27:40.197 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:27:40.197 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:27:40.197 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:27:40.197 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:27:40.197 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:27:40.197 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:27:40.197 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:27:40.197 ++ SUPPORT_END=2024-05-14 00:27:40.197 ++ VARIANT='Cloud Edition' 00:27:40.197 ++ VARIANT_ID=cloud 00:27:40.197 + uname -a 00:27:40.456 Linux fedora38-cloud-1716830599-074-updated-1705279005 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:27:40.456 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:27:40.456 Hugepages 00:27:40.456 node hugesize free / total 00:27:40.456 node0 1048576kB 0 / 0 00:27:40.456 node0 2048kB 0 / 0 00:27:40.456 00:27:40.456 Type BDF Vendor Device NUMA Driver Device Block devices 00:27:40.456 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:27:40.456 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:27:40.456 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:27:40.456 + rm -f /tmp/spdk-ld-path 00:27:40.456 + source autorun-spdk.conf 00:27:40.456 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:27:40.456 ++ SPDK_TEST_NVMF=1 00:27:40.456 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:27:40.456 ++ SPDK_TEST_USDT=1 00:27:40.456 ++ SPDK_RUN_UBSAN=1 00:27:40.456 ++ SPDK_TEST_NVMF_MDNS=1 00:27:40.456 ++ NET_TYPE=virt 00:27:40.456 ++ SPDK_JSONRPC_GO_CLIENT=1 00:27:40.456 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:27:40.456 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:27:40.456 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:27:40.456 ++ RUN_NIGHTLY=1 00:27:40.456 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:27:40.456 + [[ -n '' ]] 00:27:40.456 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:27:40.456 + for M in /var/spdk/build-*-manifest.txt 00:27:40.456 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:27:40.456 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:27:40.715 + for M in /var/spdk/build-*-manifest.txt 00:27:40.715 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:27:40.715 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:27:40.715 ++ uname 00:27:40.715 + [[ Linux == \L\i\n\u\x ]] 00:27:40.715 + sudo dmesg -T 00:27:40.715 + sudo dmesg --clear 00:27:40.715 + dmesg_pid=5982 00:27:40.715 + sudo dmesg -Tw 00:27:40.715 + [[ Fedora Linux == FreeBSD ]] 00:27:40.715 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:27:40.715 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:27:40.715 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:27:40.715 + [[ -x /usr/src/fio-static/fio ]] 00:27:40.715 + 
export FIO_BIN=/usr/src/fio-static/fio 00:27:40.715 + FIO_BIN=/usr/src/fio-static/fio 00:27:40.715 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:27:40.715 + [[ ! -v VFIO_QEMU_BIN ]] 00:27:40.715 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:27:40.715 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:27:40.715 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:27:40.715 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:27:40.715 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:27:40.715 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:27:40.715 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:27:40.715 Test configuration: 00:27:40.715 SPDK_RUN_FUNCTIONAL_TEST=1 00:27:40.715 SPDK_TEST_NVMF=1 00:27:40.715 SPDK_TEST_NVMF_TRANSPORT=tcp 00:27:40.715 SPDK_TEST_USDT=1 00:27:40.715 SPDK_RUN_UBSAN=1 00:27:40.715 SPDK_TEST_NVMF_MDNS=1 00:27:40.715 NET_TYPE=virt 00:27:40.715 SPDK_JSONRPC_GO_CLIENT=1 00:27:40.715 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:27:40.715 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:27:40.715 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:27:40.715 RUN_NIGHTLY=1 12:50:00 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:40.715 12:50:00 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:27:40.715 12:50:00 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:40.715 12:50:00 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:40.715 12:50:00 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:40.715 12:50:00 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:40.715 12:50:00 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:40.715 12:50:00 -- paths/export.sh@5 -- $ export PATH 00:27:40.716 12:50:00 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:40.716 12:50:00 -- common/autobuild_common.sh@434 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:27:40.716 12:50:00 -- common/autobuild_common.sh@435 -- $ date +%s 00:27:40.716 12:50:00 -- common/autobuild_common.sh@435 
-- $ mktemp -dt spdk_1721652600.XXXXXX 00:27:40.716 12:50:00 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1721652600.hqrT6l 00:27:40.716 12:50:00 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:27:40.716 12:50:00 -- common/autobuild_common.sh@441 -- $ '[' -n v22.11.4 ']' 00:27:40.716 12:50:00 -- common/autobuild_common.sh@442 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:27:40.716 12:50:00 -- common/autobuild_common.sh@442 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:27:40.716 12:50:00 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:27:40.716 12:50:00 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:27:40.716 12:50:00 -- common/autobuild_common.sh@451 -- $ get_config_params 00:27:40.716 12:50:00 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:27:40.716 12:50:00 -- common/autotest_common.sh@10 -- $ set +x 00:27:40.716 12:50:00 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-avahi --with-golang' 00:27:40.716 12:50:00 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:27:40.716 12:50:00 -- spdk/autobuild.sh@12 -- $ umask 022 00:27:40.716 12:50:00 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:27:40.716 12:50:00 -- spdk/autobuild.sh@16 -- $ date -u 00:27:40.716 Mon Jul 22 12:50:00 PM UTC 2024 00:27:40.716 12:50:00 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:27:40.716 LTS-59-g4b94202c6 00:27:40.716 12:50:00 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:27:40.716 12:50:00 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:27:40.716 12:50:00 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:27:40.716 12:50:00 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:27:40.716 12:50:00 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:27:40.716 12:50:00 -- common/autotest_common.sh@10 -- $ set +x 00:27:40.716 ************************************ 00:27:40.716 START TEST ubsan 00:27:40.716 ************************************ 00:27:40.716 using ubsan 00:27:40.716 12:50:00 -- common/autotest_common.sh@1104 -- $ echo 'using ubsan' 00:27:40.716 00:27:40.716 real 0m0.000s 00:27:40.716 user 0m0.000s 00:27:40.716 sys 0m0.000s 00:27:40.716 12:50:00 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:27:40.716 12:50:00 -- common/autotest_common.sh@10 -- $ set +x 00:27:40.716 ************************************ 00:27:40.716 END TEST ubsan 00:27:40.716 ************************************ 00:27:40.975 12:50:00 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']' 00:27:40.975 12:50:00 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:27:40.975 12:50:00 -- common/autobuild_common.sh@427 -- $ run_test build_native_dpdk _build_native_dpdk 00:27:40.975 12:50:00 -- common/autotest_common.sh@1077 -- $ '[' 2 -le 1 ']' 00:27:40.975 12:50:00 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:27:40.975 12:50:00 -- common/autotest_common.sh@10 -- $ set +x 00:27:40.975 ************************************ 00:27:40.975 START TEST build_native_dpdk 00:27:40.975 ************************************ 00:27:40.975 
12:50:00 -- common/autotest_common.sh@1104 -- $ _build_native_dpdk 00:27:40.975 12:50:00 -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:27:40.975 12:50:00 -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:27:40.975 12:50:00 -- common/autobuild_common.sh@50 -- $ local compiler_version 00:27:40.975 12:50:00 -- common/autobuild_common.sh@51 -- $ local compiler 00:27:40.975 12:50:00 -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:27:40.975 12:50:00 -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:27:40.975 12:50:00 -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:27:40.975 12:50:00 -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:27:40.975 12:50:00 -- common/autobuild_common.sh@61 -- $ CC=gcc 00:27:40.975 12:50:00 -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:27:40.975 12:50:00 -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:27:40.975 12:50:00 -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:27:40.975 12:50:00 -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:27:40.975 12:50:00 -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:27:40.975 12:50:00 -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:27:40.975 12:50:00 -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:27:40.975 12:50:00 -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:27:40.975 12:50:00 -- common/autobuild_common.sh@73 -- $ [[ ! -d /home/vagrant/spdk_repo/dpdk ]] 00:27:40.975 12:50:00 -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:27:40.975 12:50:00 -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:27:40.975 caf0f5d395 version: 22.11.4 00:27:40.975 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:27:40.975 dc9c799c7d vhost: fix missing spinlock unlock 00:27:40.975 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:27:40.975 6ef77f2a5e net/gve: fix RX buffer size alignment 00:27:40.975 12:50:00 -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:27:40.975 12:50:00 -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:27:40.975 12:50:00 -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4 00:27:40.975 12:50:00 -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:27:40.975 12:50:00 -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:27:40.975 12:50:00 -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:27:40.975 12:50:00 -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:27:40.975 12:50:00 -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:27:40.975 12:50:00 -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:27:40.975 12:50:00 -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:27:40.975 12:50:00 -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:27:40.975 12:50:00 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:27:40.975 12:50:00 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:27:40.975 12:50:00 -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:27:40.975 12:50:00 -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk 00:27:40.975 12:50:00 -- common/autobuild_common.sh@168 -- $ uname -s 00:27:40.975 12:50:00 -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 
00:27:40.975 12:50:00 -- common/autobuild_common.sh@169 -- $ lt 22.11.4 21.11.0 00:27:40.975 12:50:00 -- scripts/common.sh@372 -- $ cmp_versions 22.11.4 '<' 21.11.0 00:27:40.975 12:50:00 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:27:40.975 12:50:00 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:27:40.975 12:50:00 -- scripts/common.sh@335 -- $ IFS=.-: 00:27:40.975 12:50:00 -- scripts/common.sh@335 -- $ read -ra ver1 00:27:40.975 12:50:00 -- scripts/common.sh@336 -- $ IFS=.-: 00:27:40.976 12:50:00 -- scripts/common.sh@336 -- $ read -ra ver2 00:27:40.976 12:50:00 -- scripts/common.sh@337 -- $ local 'op=<' 00:27:40.976 12:50:00 -- scripts/common.sh@339 -- $ ver1_l=3 00:27:40.976 12:50:00 -- scripts/common.sh@340 -- $ ver2_l=3 00:27:40.976 12:50:00 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:27:40.976 12:50:00 -- scripts/common.sh@343 -- $ case "$op" in 00:27:40.976 12:50:00 -- scripts/common.sh@344 -- $ : 1 00:27:40.976 12:50:00 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:27:40.976 12:50:00 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:40.976 12:50:00 -- scripts/common.sh@364 -- $ decimal 22 00:27:40.976 12:50:00 -- scripts/common.sh@352 -- $ local d=22 00:27:40.976 12:50:00 -- scripts/common.sh@353 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:27:40.976 12:50:00 -- scripts/common.sh@354 -- $ echo 22 00:27:40.976 12:50:00 -- scripts/common.sh@364 -- $ ver1[v]=22 00:27:40.976 12:50:00 -- scripts/common.sh@365 -- $ decimal 21 00:27:40.976 12:50:00 -- scripts/common.sh@352 -- $ local d=21 00:27:40.976 12:50:00 -- scripts/common.sh@353 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:27:40.976 12:50:00 -- scripts/common.sh@354 -- $ echo 21 00:27:40.976 12:50:00 -- scripts/common.sh@365 -- $ ver2[v]=21 00:27:40.976 12:50:00 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:27:40.976 12:50:00 -- scripts/common.sh@366 -- $ return 1 00:27:40.976 12:50:00 -- common/autobuild_common.sh@173 -- $ patch -p1 00:27:40.976 patching file config/rte_config.h 00:27:40.976 Hunk #1 succeeded at 60 (offset 1 line). 
00:27:40.976 12:50:00 -- common/autobuild_common.sh@177 -- $ dpdk_kmods=false 00:27:40.976 12:50:00 -- common/autobuild_common.sh@178 -- $ uname -s 00:27:40.976 12:50:00 -- common/autobuild_common.sh@178 -- $ '[' Linux = FreeBSD ']' 00:27:40.976 12:50:00 -- common/autobuild_common.sh@182 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:27:40.976 12:50:00 -- common/autobuild_common.sh@182 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:27:46.244 The Meson build system 00:27:46.244 Version: 1.3.1 00:27:46.244 Source dir: /home/vagrant/spdk_repo/dpdk 00:27:46.245 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:27:46.245 Build type: native build 00:27:46.245 Program cat found: YES (/usr/bin/cat) 00:27:46.245 Project name: DPDK 00:27:46.245 Project version: 22.11.4 00:27:46.245 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:27:46.245 C linker for the host machine: gcc ld.bfd 2.39-16 00:27:46.245 Host machine cpu family: x86_64 00:27:46.245 Host machine cpu: x86_64 00:27:46.245 Message: ## Building in Developer Mode ## 00:27:46.245 Program pkg-config found: YES (/usr/bin/pkg-config) 00:27:46.245 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:27:46.245 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:27:46.245 Program objdump found: YES (/usr/bin/objdump) 00:27:46.245 Program python3 found: YES (/usr/bin/python3) 00:27:46.245 Program cat found: YES (/usr/bin/cat) 00:27:46.245 config/meson.build:83: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
00:27:46.245 Checking for size of "void *" : 8 00:27:46.245 Checking for size of "void *" : 8 (cached) 00:27:46.245 Library m found: YES 00:27:46.245 Library numa found: YES 00:27:46.245 Has header "numaif.h" : YES 00:27:46.245 Library fdt found: NO 00:27:46.245 Library execinfo found: NO 00:27:46.245 Has header "execinfo.h" : YES 00:27:46.245 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:27:46.245 Run-time dependency libarchive found: NO (tried pkgconfig) 00:27:46.245 Run-time dependency libbsd found: NO (tried pkgconfig) 00:27:46.245 Run-time dependency jansson found: NO (tried pkgconfig) 00:27:46.245 Run-time dependency openssl found: YES 3.0.9 00:27:46.245 Run-time dependency libpcap found: YES 1.10.4 00:27:46.245 Has header "pcap.h" with dependency libpcap: YES 00:27:46.245 Compiler for C supports arguments -Wcast-qual: YES 00:27:46.245 Compiler for C supports arguments -Wdeprecated: YES 00:27:46.245 Compiler for C supports arguments -Wformat: YES 00:27:46.245 Compiler for C supports arguments -Wformat-nonliteral: NO 00:27:46.245 Compiler for C supports arguments -Wformat-security: NO 00:27:46.245 Compiler for C supports arguments -Wmissing-declarations: YES 00:27:46.245 Compiler for C supports arguments -Wmissing-prototypes: YES 00:27:46.245 Compiler for C supports arguments -Wnested-externs: YES 00:27:46.245 Compiler for C supports arguments -Wold-style-definition: YES 00:27:46.245 Compiler for C supports arguments -Wpointer-arith: YES 00:27:46.245 Compiler for C supports arguments -Wsign-compare: YES 00:27:46.245 Compiler for C supports arguments -Wstrict-prototypes: YES 00:27:46.245 Compiler for C supports arguments -Wundef: YES 00:27:46.245 Compiler for C supports arguments -Wwrite-strings: YES 00:27:46.245 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:27:46.245 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:27:46.245 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:27:46.245 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:27:46.245 Compiler for C supports arguments -mavx512f: YES 00:27:46.245 Checking if "AVX512 checking" compiles: YES 00:27:46.245 Fetching value of define "__SSE4_2__" : 1 00:27:46.245 Fetching value of define "__AES__" : 1 00:27:46.245 Fetching value of define "__AVX__" : 1 00:27:46.245 Fetching value of define "__AVX2__" : 1 00:27:46.245 Fetching value of define "__AVX512BW__" : (undefined) 00:27:46.245 Fetching value of define "__AVX512CD__" : (undefined) 00:27:46.245 Fetching value of define "__AVX512DQ__" : (undefined) 00:27:46.245 Fetching value of define "__AVX512F__" : (undefined) 00:27:46.245 Fetching value of define "__AVX512VL__" : (undefined) 00:27:46.245 Fetching value of define "__PCLMUL__" : 1 00:27:46.245 Fetching value of define "__RDRND__" : 1 00:27:46.245 Fetching value of define "__RDSEED__" : 1 00:27:46.245 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:27:46.245 Compiler for C supports arguments -Wno-format-truncation: YES 00:27:46.245 Message: lib/kvargs: Defining dependency "kvargs" 00:27:46.245 Message: lib/telemetry: Defining dependency "telemetry" 00:27:46.245 Checking for function "getentropy" : YES 00:27:46.245 Message: lib/eal: Defining dependency "eal" 00:27:46.245 Message: lib/ring: Defining dependency "ring" 00:27:46.245 Message: lib/rcu: Defining dependency "rcu" 00:27:46.245 Message: lib/mempool: Defining dependency "mempool" 00:27:46.245 Message: lib/mbuf: Defining dependency "mbuf" 00:27:46.245 Fetching value of define 
"__PCLMUL__" : 1 (cached) 00:27:46.245 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:27:46.245 Compiler for C supports arguments -mpclmul: YES 00:27:46.245 Compiler for C supports arguments -maes: YES 00:27:46.245 Compiler for C supports arguments -mavx512f: YES (cached) 00:27:46.245 Compiler for C supports arguments -mavx512bw: YES 00:27:46.245 Compiler for C supports arguments -mavx512dq: YES 00:27:46.245 Compiler for C supports arguments -mavx512vl: YES 00:27:46.245 Compiler for C supports arguments -mvpclmulqdq: YES 00:27:46.245 Compiler for C supports arguments -mavx2: YES 00:27:46.245 Compiler for C supports arguments -mavx: YES 00:27:46.245 Message: lib/net: Defining dependency "net" 00:27:46.245 Message: lib/meter: Defining dependency "meter" 00:27:46.245 Message: lib/ethdev: Defining dependency "ethdev" 00:27:46.245 Message: lib/pci: Defining dependency "pci" 00:27:46.245 Message: lib/cmdline: Defining dependency "cmdline" 00:27:46.245 Message: lib/metrics: Defining dependency "metrics" 00:27:46.245 Message: lib/hash: Defining dependency "hash" 00:27:46.245 Message: lib/timer: Defining dependency "timer" 00:27:46.245 Fetching value of define "__AVX2__" : 1 (cached) 00:27:46.245 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:27:46.245 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:27:46.245 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:27:46.245 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:27:46.245 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:27:46.245 Message: lib/acl: Defining dependency "acl" 00:27:46.245 Message: lib/bbdev: Defining dependency "bbdev" 00:27:46.245 Message: lib/bitratestats: Defining dependency "bitratestats" 00:27:46.245 Run-time dependency libelf found: YES 0.190 00:27:46.245 Message: lib/bpf: Defining dependency "bpf" 00:27:46.245 Message: lib/cfgfile: Defining dependency "cfgfile" 00:27:46.245 Message: lib/compressdev: Defining dependency "compressdev" 00:27:46.245 Message: lib/cryptodev: Defining dependency "cryptodev" 00:27:46.245 Message: lib/distributor: Defining dependency "distributor" 00:27:46.245 Message: lib/efd: Defining dependency "efd" 00:27:46.245 Message: lib/eventdev: Defining dependency "eventdev" 00:27:46.245 Message: lib/gpudev: Defining dependency "gpudev" 00:27:46.245 Message: lib/gro: Defining dependency "gro" 00:27:46.245 Message: lib/gso: Defining dependency "gso" 00:27:46.245 Message: lib/ip_frag: Defining dependency "ip_frag" 00:27:46.245 Message: lib/jobstats: Defining dependency "jobstats" 00:27:46.245 Message: lib/latencystats: Defining dependency "latencystats" 00:27:46.245 Message: lib/lpm: Defining dependency "lpm" 00:27:46.245 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:27:46.245 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:27:46.245 Fetching value of define "__AVX512IFMA__" : (undefined) 00:27:46.245 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:27:46.245 Message: lib/member: Defining dependency "member" 00:27:46.245 Message: lib/pcapng: Defining dependency "pcapng" 00:27:46.245 Compiler for C supports arguments -Wno-cast-qual: YES 00:27:46.245 Message: lib/power: Defining dependency "power" 00:27:46.245 Message: lib/rawdev: Defining dependency "rawdev" 00:27:46.245 Message: lib/regexdev: Defining dependency "regexdev" 00:27:46.245 Message: lib/dmadev: Defining dependency "dmadev" 00:27:46.245 Message: lib/rib: Defining 
dependency "rib" 00:27:46.245 Message: lib/reorder: Defining dependency "reorder" 00:27:46.245 Message: lib/sched: Defining dependency "sched" 00:27:46.245 Message: lib/security: Defining dependency "security" 00:27:46.245 Message: lib/stack: Defining dependency "stack" 00:27:46.245 Has header "linux/userfaultfd.h" : YES 00:27:46.245 Message: lib/vhost: Defining dependency "vhost" 00:27:46.245 Message: lib/ipsec: Defining dependency "ipsec" 00:27:46.245 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:27:46.245 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:27:46.245 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:27:46.245 Compiler for C supports arguments -mavx512bw: YES (cached) 00:27:46.245 Message: lib/fib: Defining dependency "fib" 00:27:46.245 Message: lib/port: Defining dependency "port" 00:27:46.245 Message: lib/pdump: Defining dependency "pdump" 00:27:46.245 Message: lib/table: Defining dependency "table" 00:27:46.245 Message: lib/pipeline: Defining dependency "pipeline" 00:27:46.245 Message: lib/graph: Defining dependency "graph" 00:27:46.245 Message: lib/node: Defining dependency "node" 00:27:46.245 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:27:46.245 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:27:46.245 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:27:46.245 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:27:46.245 Compiler for C supports arguments -Wno-sign-compare: YES 00:27:46.245 Compiler for C supports arguments -Wno-unused-value: YES 00:27:46.245 Compiler for C supports arguments -Wno-format: YES 00:27:46.245 Compiler for C supports arguments -Wno-format-security: YES 00:27:46.245 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:27:47.623 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:27:47.623 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:27:47.623 Compiler for C supports arguments -Wno-unused-parameter: YES 00:27:47.623 Fetching value of define "__AVX2__" : 1 (cached) 00:27:47.623 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:27:47.623 Compiler for C supports arguments -mavx512f: YES (cached) 00:27:47.623 Compiler for C supports arguments -mavx512bw: YES (cached) 00:27:47.623 Compiler for C supports arguments -march=skylake-avx512: YES 00:27:47.623 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:27:47.623 Program doxygen found: YES (/usr/bin/doxygen) 00:27:47.623 Configuring doxy-api.conf using configuration 00:27:47.623 Program sphinx-build found: NO 00:27:47.623 Configuring rte_build_config.h using configuration 00:27:47.623 Message: 00:27:47.623 ================= 00:27:47.623 Applications Enabled 00:27:47.623 ================= 00:27:47.623 00:27:47.623 apps: 00:27:47.623 dumpcap, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf, 00:27:47.623 test-eventdev, test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad, 00:27:47.623 test-security-perf, 00:27:47.623 00:27:47.623 Message: 00:27:47.623 ================= 00:27:47.623 Libraries Enabled 00:27:47.623 ================= 00:27:47.623 00:27:47.623 libs: 00:27:47.623 kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net, 00:27:47.623 meter, ethdev, pci, cmdline, metrics, hash, timer, acl, 00:27:47.623 bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd, 00:27:47.623 eventdev, gpudev, gro, gso, ip_frag, 
jobstats, latencystats, lpm,
00:27:47.623 member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder,
00:27:47.623 sched, security, stack, vhost, ipsec, fib, port, pdump,
00:27:47.624 table, pipeline, graph, node,
00:27:47.624 
00:27:47.624 Message:
00:27:47.624 ===============
00:27:47.624 Drivers Enabled
00:27:47.624 ===============
00:27:47.624 
00:27:47.624 common:
00:27:47.624 
00:27:47.624 bus:
00:27:47.624 pci, vdev,
00:27:47.624 mempool:
00:27:47.624 ring,
00:27:47.624 dma:
00:27:47.624 
00:27:47.624 net:
00:27:47.624 i40e,
00:27:47.624 raw:
00:27:47.624 
00:27:47.624 crypto:
00:27:47.624 
00:27:47.624 compress:
00:27:47.624 
00:27:47.624 regex:
00:27:47.624 
00:27:47.624 vdpa:
00:27:47.624 
00:27:47.624 event:
00:27:47.624 
00:27:47.624 baseband:
00:27:47.624 
00:27:47.624 gpu:
00:27:47.624 
00:27:47.624 
00:27:47.624 Message:
00:27:47.624 =================
00:27:47.624 Content Skipped
00:27:47.624 =================
00:27:47.624 
00:27:47.624 apps:
00:27:47.624 
00:27:47.624 libs:
00:27:47.624 kni: explicitly disabled via build config (deprecated lib)
00:27:47.624 flow_classify: explicitly disabled via build config (deprecated lib)
00:27:47.624 
00:27:47.624 drivers:
00:27:47.624 common/cpt: not in enabled drivers build config
00:27:47.624 common/dpaax: not in enabled drivers build config
00:27:47.624 common/iavf: not in enabled drivers build config
00:27:47.624 common/idpf: not in enabled drivers build config
00:27:47.624 common/mvep: not in enabled drivers build config
00:27:47.624 common/octeontx: not in enabled drivers build config
00:27:47.624 bus/auxiliary: not in enabled drivers build config
00:27:47.624 bus/dpaa: not in enabled drivers build config
00:27:47.624 bus/fslmc: not in enabled drivers build config
00:27:47.624 bus/ifpga: not in enabled drivers build config
00:27:47.624 bus/vmbus: not in enabled drivers build config
00:27:47.624 common/cnxk: not in enabled drivers build config
00:27:47.624 common/mlx5: not in enabled drivers build config
00:27:47.624 common/qat: not in enabled drivers build config
00:27:47.624 common/sfc_efx: not in enabled drivers build config
00:27:47.624 mempool/bucket: not in enabled drivers build config
00:27:47.624 mempool/cnxk: not in enabled drivers build config
00:27:47.624 mempool/dpaa: not in enabled drivers build config
00:27:47.624 mempool/dpaa2: not in enabled drivers build config
00:27:47.624 mempool/octeontx: not in enabled drivers build config
00:27:47.624 mempool/stack: not in enabled drivers build config
00:27:47.624 dma/cnxk: not in enabled drivers build config
00:27:47.624 dma/dpaa: not in enabled drivers build config
00:27:47.624 dma/dpaa2: not in enabled drivers build config
00:27:47.624 dma/hisilicon: not in enabled drivers build config
00:27:47.624 dma/idxd: not in enabled drivers build config
00:27:47.624 dma/ioat: not in enabled drivers build config
00:27:47.624 dma/skeleton: not in enabled drivers build config
00:27:47.624 net/af_packet: not in enabled drivers build config
00:27:47.624 net/af_xdp: not in enabled drivers build config
00:27:47.624 net/ark: not in enabled drivers build config
00:27:47.624 net/atlantic: not in enabled drivers build config
00:27:47.624 net/avp: not in enabled drivers build config
00:27:47.624 net/axgbe: not in enabled drivers build config
00:27:47.624 net/bnx2x: not in enabled drivers build config
00:27:47.624 net/bnxt: not in enabled drivers build config
00:27:47.624 net/bonding: not in enabled drivers build config
00:27:47.624 net/cnxk: not in enabled drivers build config
00:27:47.624 net/cxgbe: not in enabled drivers build config
00:27:47.624 net/dpaa: not in enabled drivers build config
00:27:47.624 net/dpaa2: not in enabled drivers build config
00:27:47.624 net/e1000: not in enabled drivers build config
00:27:47.624 net/ena: not in enabled drivers build config
00:27:47.624 net/enetc: not in enabled drivers build config
00:27:47.624 net/enetfec: not in enabled drivers build config
00:27:47.624 net/enic: not in enabled drivers build config
00:27:47.624 net/failsafe: not in enabled drivers build config
00:27:47.624 net/fm10k: not in enabled drivers build config
00:27:47.624 net/gve: not in enabled drivers build config
00:27:47.624 net/hinic: not in enabled drivers build config
00:27:47.624 net/hns3: not in enabled drivers build config
00:27:47.624 net/iavf: not in enabled drivers build config
00:27:47.624 net/ice: not in enabled drivers build config
00:27:47.624 net/idpf: not in enabled drivers build config
00:27:47.624 net/igc: not in enabled drivers build config
00:27:47.624 net/ionic: not in enabled drivers build config
00:27:47.624 net/ipn3ke: not in enabled drivers build config
00:27:47.624 net/ixgbe: not in enabled drivers build config
00:27:47.624 net/kni: not in enabled drivers build config
00:27:47.624 net/liquidio: not in enabled drivers build config
00:27:47.624 net/mana: not in enabled drivers build config
00:27:47.624 net/memif: not in enabled drivers build config
00:27:47.624 net/mlx4: not in enabled drivers build config
00:27:47.624 net/mlx5: not in enabled drivers build config
00:27:47.624 net/mvneta: not in enabled drivers build config
00:27:47.624 net/mvpp2: not in enabled drivers build config
00:27:47.624 net/netvsc: not in enabled drivers build config
00:27:47.624 net/nfb: not in enabled drivers build config
00:27:47.624 net/nfp: not in enabled drivers build config
00:27:47.624 net/ngbe: not in enabled drivers build config
00:27:47.624 net/null: not in enabled drivers build config
00:27:47.624 net/octeontx: not in enabled drivers build config
00:27:47.624 net/octeon_ep: not in enabled drivers build config
00:27:47.624 net/pcap: not in enabled drivers build config
00:27:47.624 net/pfe: not in enabled drivers build config
00:27:47.624 net/qede: not in enabled drivers build config
00:27:47.624 net/ring: not in enabled drivers build config
00:27:47.624 net/sfc: not in enabled drivers build config
00:27:47.624 net/softnic: not in enabled drivers build config
00:27:47.624 net/tap: not in enabled drivers build config
00:27:47.624 net/thunderx: not in enabled drivers build config
00:27:47.624 net/txgbe: not in enabled drivers build config
00:27:47.624 net/vdev_netvsc: not in enabled drivers build config
00:27:47.624 net/vhost: not in enabled drivers build config
00:27:47.624 net/virtio: not in enabled drivers build config
00:27:47.624 net/vmxnet3: not in enabled drivers build config
00:27:47.624 raw/cnxk_bphy: not in enabled drivers build config
00:27:47.624 raw/cnxk_gpio: not in enabled drivers build config
00:27:47.624 raw/dpaa2_cmdif: not in enabled drivers build config
00:27:47.624 raw/ifpga: not in enabled drivers build config
00:27:47.624 raw/ntb: not in enabled drivers build config
00:27:47.624 raw/skeleton: not in enabled drivers build config
00:27:47.624 crypto/armv8: not in enabled drivers build config
00:27:47.624 crypto/bcmfs: not in enabled drivers build config
00:27:47.624 crypto/caam_jr: not in enabled drivers build config
00:27:47.624 crypto/ccp: not in enabled drivers build config
00:27:47.624 crypto/cnxk: not in enabled drivers build config
00:27:47.624 crypto/dpaa_sec: not in enabled drivers build config
00:27:47.624 crypto/dpaa2_sec: not in enabled drivers build config
00:27:47.624 crypto/ipsec_mb: not in enabled drivers build config
00:27:47.624 crypto/mlx5: not in enabled drivers build config
00:27:47.624 crypto/mvsam: not in enabled drivers build config
00:27:47.624 crypto/nitrox: not in enabled drivers build config
00:27:47.624 crypto/null: not in enabled drivers build config
00:27:47.624 crypto/octeontx: not in enabled drivers build config
00:27:47.624 crypto/openssl: not in enabled drivers build config
00:27:47.624 crypto/scheduler: not in enabled drivers build config
00:27:47.624 crypto/uadk: not in enabled drivers build config
00:27:47.624 crypto/virtio: not in enabled drivers build config
00:27:47.624 compress/isal: not in enabled drivers build config
00:27:47.624 compress/mlx5: not in enabled drivers build config
00:27:47.624 compress/octeontx: not in enabled drivers build config
00:27:47.624 compress/zlib: not in enabled drivers build config
00:27:47.624 regex/mlx5: not in enabled drivers build config
00:27:47.624 regex/cn9k: not in enabled drivers build config
00:27:47.624 vdpa/ifc: not in enabled drivers build config
00:27:47.624 vdpa/mlx5: not in enabled drivers build config
00:27:47.624 vdpa/sfc: not in enabled drivers build config
00:27:47.624 event/cnxk: not in enabled drivers build config
00:27:47.624 event/dlb2: not in enabled drivers build config
00:27:47.624 event/dpaa: not in enabled drivers build config
00:27:47.624 event/dpaa2: not in enabled drivers build config
00:27:47.624 event/dsw: not in enabled drivers build config
00:27:47.624 event/opdl: not in enabled drivers build config
00:27:47.624 event/skeleton: not in enabled drivers build config
00:27:47.624 event/sw: not in enabled drivers build config
00:27:47.624 event/octeontx: not in enabled drivers build config
00:27:47.624 baseband/acc: not in enabled drivers build config
00:27:47.624 baseband/fpga_5gnr_fec: not in enabled drivers build config
00:27:47.624 baseband/fpga_lte_fec: not in enabled drivers build config
00:27:47.624 baseband/la12xx: not in enabled drivers build config
00:27:47.624 baseband/null: not in enabled drivers build config
00:27:47.624 baseband/turbo_sw: not in enabled drivers build config
00:27:47.624 gpu/cuda: not in enabled drivers build config
00:27:47.624 
00:27:47.624 
00:27:47.624 Build targets in project: 314
00:27:47.624 
00:27:47.624 DPDK 22.11.4
00:27:47.624 
00:27:47.624 User defined options
00:27:47.624 libdir : lib
00:27:47.624 prefix : /home/vagrant/spdk_repo/dpdk/build
00:27:47.624 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow
00:27:47.624 c_link_args :
00:27:47.624 enable_docs : false
00:27:47.624 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,
00:27:47.624 enable_kmods : false
00:27:47.624 machine : native
00:27:47.624 tests : false
00:27:47.624 
00:27:47.624 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:27:47.625 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated.
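[Editor's note] For reference, the configure step summarized above can be approximated from the "User defined options" block. This is a sketch only, not the literal command from SPDK's autobuild scripts (which are not shown in this log); the build directory name build-tmp is taken from the ninja step that follows, and using the explicit "meson setup" subcommand avoids the deprecation warning printed just above.

    # Sketch (assumed): reconstruct the DPDK 22.11.4 configuration shown above,
    # then build with ninja as the next log entry does.
    cd /home/vagrant/spdk_repo/dpdk
    meson setup build-tmp \
        --prefix=/home/vagrant/spdk_repo/dpdk/build \
        --libdir=lib \
        -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
        -Denable_docs=false \
        -Denable_kmods=false \
        -Dtests=false \
        -Dmachine=native \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base
    ninja -C build-tmp -j10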
00:27:47.625 12:50:06 -- common/autobuild_common.sh@186 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:27:47.625 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:27:47.625 [1/743] Generating lib/rte_kvargs_def with a custom command 00:27:47.625 [2/743] Generating lib/rte_kvargs_mingw with a custom command 00:27:47.625 [3/743] Generating lib/rte_telemetry_def with a custom command 00:27:47.625 [4/743] Generating lib/rte_telemetry_mingw with a custom command 00:27:47.625 [5/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:27:47.625 [6/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:27:47.625 [7/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:27:47.625 [8/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:27:47.625 [9/743] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:27:47.884 [10/743] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:27:47.884 [11/743] Linking static target lib/librte_kvargs.a 00:27:47.884 [12/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:27:47.884 [13/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:27:47.884 [14/743] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:27:47.884 [15/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:27:47.884 [16/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:27:47.884 [17/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:27:47.884 [18/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:27:47.884 [19/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:27:48.143 [20/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o 00:27:48.143 [21/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:27:48.143 [22/743] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:27:48.143 [23/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:27:48.143 [24/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:27:48.143 [25/743] Linking target lib/librte_kvargs.so.23.0 00:27:48.143 [26/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:27:48.143 [27/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:27:48.143 [28/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:27:48.143 [29/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:27:48.143 [30/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:27:48.143 [31/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:27:48.143 [32/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:27:48.402 [33/743] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:27:48.402 [34/743] Linking static target lib/librte_telemetry.a 00:27:48.402 [35/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:27:48.402 [36/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:27:48.402 [37/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:27:48.402 [38/743] Compiling C object 
lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:27:48.402 [39/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:27:48.402 [40/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:27:48.402 [41/743] Generating symbol file lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols 00:27:48.661 [42/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:27:48.661 [43/743] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:27:48.661 [44/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:27:48.661 [45/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:27:48.661 [46/743] Linking target lib/librte_telemetry.so.23.0 00:27:48.661 [47/743] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:27:48.661 [48/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:27:48.661 [49/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:27:48.661 [50/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:27:48.661 [51/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:27:48.920 [52/743] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols 00:27:48.920 [53/743] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:27:48.920 [54/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:27:48.920 [55/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:27:48.920 [56/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:27:48.920 [57/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:27:48.920 [58/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:27:48.920 [59/743] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:27:48.920 [60/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:27:48.920 [61/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:27:48.920 [62/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:27:48.920 [63/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:27:48.920 [64/743] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:27:48.920 [65/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o 00:27:48.920 [66/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:27:49.179 [67/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:27:49.179 [68/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:27:49.179 [69/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:27:49.179 [70/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:27:49.179 [71/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:27:49.179 [72/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:27:49.179 [73/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:27:49.179 [74/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:27:49.179 [75/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:27:49.179 [76/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:27:49.179 [77/743] Generating lib/rte_eal_def with a custom command 00:27:49.179 [78/743] Generating lib/rte_eal_mingw with a 
custom command 00:27:49.179 [79/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:27:49.179 [80/743] Generating lib/rte_ring_def with a custom command 00:27:49.179 [81/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:27:49.179 [82/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:27:49.179 [83/743] Generating lib/rte_ring_mingw with a custom command 00:27:49.179 [84/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:27:49.179 [85/743] Generating lib/rte_rcu_def with a custom command 00:27:49.179 [86/743] Generating lib/rte_rcu_mingw with a custom command 00:27:49.438 [87/743] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:27:49.438 [88/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:27:49.438 [89/743] Linking static target lib/librte_ring.a 00:27:49.438 [90/743] Generating lib/rte_mempool_def with a custom command 00:27:49.438 [91/743] Generating lib/rte_mempool_mingw with a custom command 00:27:49.438 [92/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:27:49.438 [93/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:27:49.697 [94/743] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:27:49.697 [95/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:27:49.697 [96/743] Linking static target lib/librte_eal.a 00:27:49.956 [97/743] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:27:49.956 [98/743] Generating lib/rte_mbuf_def with a custom command 00:27:49.956 [99/743] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:27:49.956 [100/743] Generating lib/rte_mbuf_mingw with a custom command 00:27:49.956 [101/743] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:27:50.215 [102/743] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:27:50.215 [103/743] Linking static target lib/librte_rcu.a 00:27:50.215 [104/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:27:50.215 [105/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:27:50.215 [106/743] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:27:50.474 [107/743] Linking static target lib/librte_mempool.a 00:27:50.474 [108/743] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:27:50.474 [109/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:27:50.474 [110/743] Generating lib/rte_net_def with a custom command 00:27:50.474 [111/743] Generating lib/rte_net_mingw with a custom command 00:27:50.474 [112/743] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:27:50.474 [113/743] Linking static target lib/net/libnet_crc_avx512_lib.a 00:27:50.474 [114/743] Generating lib/rte_meter_def with a custom command 00:27:50.474 [115/743] Generating lib/rte_meter_mingw with a custom command 00:27:50.733 [116/743] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:27:50.733 [117/743] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:27:50.733 [118/743] Linking static target lib/librte_meter.a 00:27:50.733 [119/743] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:27:50.733 [120/743] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:27:50.992 [121/743] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:27:50.992 [122/743] 
Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:27:50.992 [123/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:27:50.992 [124/743] Linking static target lib/librte_mbuf.a 00:27:50.992 [125/743] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:27:50.992 [126/743] Linking static target lib/librte_net.a 00:27:50.992 [127/743] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:27:51.251 [128/743] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:27:51.251 [129/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:27:51.251 [130/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:27:51.510 [131/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:27:51.510 [132/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:27:51.510 [133/743] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:27:51.510 [134/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:27:51.769 [135/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:27:52.028 [136/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:27:52.028 [137/743] Generating lib/rte_ethdev_def with a custom command 00:27:52.028 [138/743] Generating lib/rte_ethdev_mingw with a custom command 00:27:52.028 [139/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:27:52.028 [140/743] Generating lib/rte_pci_def with a custom command 00:27:52.287 [141/743] Generating lib/rte_pci_mingw with a custom command 00:27:52.287 [142/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:27:52.287 [143/743] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:27:52.287 [144/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:27:52.287 [145/743] Linking static target lib/librte_pci.a 00:27:52.287 [146/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:27:52.287 [147/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:27:52.287 [148/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:27:52.287 [149/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:27:52.287 [150/743] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:27:52.546 [151/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:27:52.546 [152/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:27:52.546 [153/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:27:52.546 [154/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:27:52.546 [155/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:27:52.546 [156/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:27:52.546 [157/743] Generating lib/rte_cmdline_def with a custom command 00:27:52.546 [158/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:27:52.546 [159/743] Generating lib/rte_cmdline_mingw with a custom command 00:27:52.546 [160/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:27:52.546 [161/743] Generating lib/rte_metrics_def with a custom command 00:27:52.546 [162/743] Generating lib/rte_metrics_mingw with a custom command 00:27:52.805 [163/743] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:27:52.805 [164/743] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:27:52.805 [165/743] Generating lib/rte_hash_def with a custom command 00:27:52.805 [166/743] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:27:52.805 [167/743] Generating lib/rte_hash_mingw with a custom command 00:27:52.805 [168/743] Generating lib/rte_timer_def with a custom command 00:27:52.805 [169/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:27:52.805 [170/743] Generating lib/rte_timer_mingw with a custom command 00:27:52.805 [171/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:27:52.805 [172/743] Linking static target lib/librte_cmdline.a 00:27:52.805 [173/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:27:53.374 [174/743] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:27:53.374 [175/743] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:27:53.374 [176/743] Linking static target lib/librte_metrics.a 00:27:53.374 [177/743] Linking static target lib/librte_timer.a 00:27:53.640 [178/743] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:27:53.640 [179/743] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:27:53.640 [180/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:27:53.640 [181/743] Linking static target lib/librte_ethdev.a 00:27:53.640 [182/743] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:27:53.640 [183/743] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:27:53.640 [184/743] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:27:54.209 [185/743] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:27:54.209 [186/743] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:27:54.209 [187/743] Generating lib/rte_acl_def with a custom command 00:27:54.209 [188/743] Generating lib/rte_acl_mingw with a custom command 00:27:54.468 [189/743] Generating lib/rte_bbdev_def with a custom command 00:27:54.468 [190/743] Generating lib/rte_bbdev_mingw with a custom command 00:27:54.468 [191/743] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:27:54.468 [192/743] Generating lib/rte_bitratestats_def with a custom command 00:27:54.468 [193/743] Generating lib/rte_bitratestats_mingw with a custom command 00:27:54.728 [194/743] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:27:54.987 [195/743] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:27:54.987 [196/743] Linking static target lib/librte_bitratestats.a 00:27:55.246 [197/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:27:55.246 [198/743] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:27:55.246 [199/743] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:27:55.246 [200/743] Linking static target lib/librte_bbdev.a 00:27:55.246 [201/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:27:55.505 [202/743] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:27:55.505 [203/743] Linking static target lib/librte_hash.a 00:27:55.763 [204/743] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:27:55.763 [205/743] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:27:55.763 [206/743] Linking static target 
lib/acl/libavx512_tmp.a 00:27:55.763 [207/743] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:27:55.763 [208/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:27:56.035 [209/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:27:56.303 [210/743] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:27:56.303 [211/743] Generating lib/rte_bpf_def with a custom command 00:27:56.303 [212/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:27:56.303 [213/743] Generating lib/rte_bpf_mingw with a custom command 00:27:56.303 [214/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:27:56.303 [215/743] Generating lib/rte_cfgfile_def with a custom command 00:27:56.303 [216/743] Generating lib/rte_cfgfile_mingw with a custom command 00:27:56.569 [217/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:27:56.569 [218/743] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx2.c.o 00:27:56.569 [219/743] Linking static target lib/librte_acl.a 00:27:56.569 [220/743] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:27:56.569 [221/743] Linking static target lib/librte_cfgfile.a 00:27:56.836 [222/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:27:56.836 [223/743] Generating lib/rte_compressdev_def with a custom command 00:27:56.836 [224/743] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:27:56.836 [225/743] Generating lib/rte_compressdev_mingw with a custom command 00:27:56.836 [226/743] Linking target lib/librte_eal.so.23.0 00:27:56.836 [227/743] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:27:57.100 [228/743] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:27:57.100 [229/743] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols 00:27:57.100 [230/743] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:27:57.100 [231/743] Linking target lib/librte_ring.so.23.0 00:27:57.100 [232/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:27:57.100 [233/743] Linking target lib/librte_meter.so.23.0 00:27:57.100 [234/743] Linking target lib/librte_pci.so.23.0 00:27:57.100 [235/743] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols 00:27:57.100 [236/743] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols 00:27:57.100 [237/743] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols 00:27:57.364 [238/743] Linking target lib/librte_rcu.so.23.0 00:27:57.364 [239/743] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:27:57.364 [240/743] Linking target lib/librte_mempool.so.23.0 00:27:57.364 [241/743] Linking target lib/librte_timer.so.23.0 00:27:57.364 [242/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:27:57.364 [243/743] Linking target lib/librte_acl.so.23.0 00:27:57.364 [244/743] Linking static target lib/librte_bpf.a 00:27:57.364 [245/743] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:27:57.364 [246/743] Linking target lib/librte_cfgfile.so.23.0 00:27:57.364 [247/743] Linking static target lib/librte_compressdev.a 00:27:57.364 [248/743] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols 00:27:57.365 [249/743] Generating symbol file 
lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols 00:27:57.365 [250/743] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols 00:27:57.365 [251/743] Generating lib/rte_cryptodev_def with a custom command 00:27:57.365 [252/743] Generating lib/rte_cryptodev_mingw with a custom command 00:27:57.365 [253/743] Linking target lib/librte_mbuf.so.23.0 00:27:57.365 [254/743] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols 00:27:57.365 [255/743] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:27:57.628 [256/743] Generating lib/rte_distributor_def with a custom command 00:27:57.628 [257/743] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols 00:27:57.628 [258/743] Generating lib/rte_distributor_mingw with a custom command 00:27:57.628 [259/743] Linking target lib/librte_net.so.23.0 00:27:57.628 [260/743] Linking target lib/librte_bbdev.so.23.0 00:27:57.628 [261/743] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:27:57.628 [262/743] Generating lib/rte_efd_def with a custom command 00:27:57.628 [263/743] Generating lib/rte_efd_mingw with a custom command 00:27:57.628 [264/743] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:27:57.628 [265/743] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols 00:27:57.887 [266/743] Linking target lib/librte_cmdline.so.23.0 00:27:57.887 [267/743] Linking target lib/librte_hash.so.23.0 00:27:57.887 [268/743] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols 00:27:58.145 [269/743] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:27:58.145 [270/743] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:27:58.145 [271/743] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:27:58.145 [272/743] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:27:58.145 [273/743] Linking target lib/librte_ethdev.so.23.0 00:27:58.404 [274/743] Linking target lib/librte_compressdev.so.23.0 00:27:58.404 [275/743] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:27:58.404 [276/743] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols 00:27:58.404 [277/743] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:27:58.404 [278/743] Linking target lib/librte_metrics.so.23.0 00:27:58.404 [279/743] Linking target lib/librte_bpf.so.23.0 00:27:58.404 [280/743] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:27:58.404 [281/743] Linking static target lib/librte_distributor.a 00:27:58.663 [282/743] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols 00:27:58.663 [283/743] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols 00:27:58.663 [284/743] Linking target lib/librte_bitratestats.so.23.0 00:27:58.663 [285/743] Generating lib/rte_eventdev_def with a custom command 00:27:58.663 [286/743] Generating lib/rte_eventdev_mingw with a custom command 00:27:58.663 [287/743] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:27:58.663 [288/743] Linking target lib/librte_distributor.so.23.0 00:27:58.663 [289/743] Generating lib/rte_gpudev_def with a custom command 00:27:58.921 [290/743] Generating 
lib/rte_gpudev_mingw with a custom command 00:27:58.921 [291/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:27:59.180 [292/743] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:27:59.180 [293/743] Linking static target lib/librte_efd.a 00:27:59.439 [294/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:27:59.439 [295/743] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:27:59.439 [296/743] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:27:59.439 [297/743] Linking static target lib/librte_cryptodev.a 00:27:59.439 [298/743] Linking target lib/librte_efd.so.23.0 00:27:59.697 [299/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:27:59.697 [300/743] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:27:59.697 [301/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:27:59.697 [302/743] Linking static target lib/librte_gpudev.a 00:27:59.697 [303/743] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:27:59.697 [304/743] Generating lib/rte_gro_def with a custom command 00:27:59.697 [305/743] Generating lib/rte_gro_mingw with a custom command 00:27:59.697 [306/743] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:27:59.954 [307/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:28:00.211 [308/743] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:28:00.211 [309/743] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:28:00.211 [310/743] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:28:00.211 [311/743] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:28:00.211 [312/743] Generating lib/rte_gso_def with a custom command 00:28:00.211 [313/743] Generating lib/rte_gso_mingw with a custom command 00:28:00.469 [314/743] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:28:00.469 [315/743] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:28:00.469 [316/743] Linking target lib/librte_gpudev.so.23.0 00:28:00.469 [317/743] Linking static target lib/librte_gro.a 00:28:00.469 [318/743] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:28:00.726 [319/743] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:28:00.726 [320/743] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:28:00.726 [321/743] Linking target lib/librte_gro.so.23.0 00:28:00.726 [322/743] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:28:00.726 [323/743] Generating lib/rte_ip_frag_def with a custom command 00:28:00.727 [324/743] Generating lib/rte_ip_frag_mingw with a custom command 00:28:00.985 [325/743] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:28:00.985 [326/743] Linking static target lib/librte_gso.a 00:28:00.985 [327/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:28:00.985 [328/743] Linking static target lib/librte_eventdev.a 00:28:00.985 [329/743] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:28:00.985 [330/743] Linking static target lib/librte_jobstats.a 00:28:00.985 [331/743] Generating lib/rte_jobstats_def with a custom command 00:28:00.985 [332/743] Generating lib/rte_jobstats_mingw with a custom command 00:28:00.985 [333/743] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 
00:28:01.243 [334/743] Linking target lib/librte_gso.so.23.0 00:28:01.243 [335/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:28:01.243 [336/743] Generating lib/rte_latencystats_def with a custom command 00:28:01.243 [337/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:28:01.243 [338/743] Generating lib/rte_latencystats_mingw with a custom command 00:28:01.243 [339/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:28:01.243 [340/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:28:01.243 [341/743] Generating lib/rte_lpm_def with a custom command 00:28:01.243 [342/743] Generating lib/rte_lpm_mingw with a custom command 00:28:01.243 [343/743] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:28:01.243 [344/743] Linking target lib/librte_jobstats.so.23.0 00:28:01.501 [345/743] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:28:01.501 [346/743] Linking target lib/librte_cryptodev.so.23.0 00:28:01.501 [347/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:28:01.501 [348/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:28:01.501 [349/743] Linking static target lib/librte_ip_frag.a 00:28:01.760 [350/743] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols 00:28:01.760 [351/743] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:28:02.019 [352/743] Linking target lib/librte_ip_frag.so.23.0 00:28:02.019 [353/743] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:28:02.019 [354/743] Linking static target lib/librte_latencystats.a 00:28:02.019 [355/743] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:28:02.019 [356/743] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:28:02.019 [357/743] Linking static target lib/member/libsketch_avx512_tmp.a 00:28:02.019 [358/743] Generating lib/rte_member_def with a custom command 00:28:02.019 [359/743] Generating lib/rte_member_mingw with a custom command 00:28:02.019 [360/743] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols 00:28:02.019 [361/743] Generating lib/rte_pcapng_def with a custom command 00:28:02.019 [362/743] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:28:02.019 [363/743] Generating lib/rte_pcapng_mingw with a custom command 00:28:02.019 [364/743] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:28:02.277 [365/743] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:28:02.277 [366/743] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:28:02.277 [367/743] Linking target lib/librte_latencystats.so.23.0 00:28:02.277 [368/743] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:28:02.277 [369/743] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:28:02.277 [370/743] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:28:02.535 [371/743] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:28:02.535 [372/743] Linking static target lib/librte_lpm.a 00:28:02.535 [373/743] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o 00:28:02.794 [374/743] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 
00:28:02.794 [375/743] Generating lib/rte_power_def with a custom command 00:28:02.794 [376/743] Generating lib/rte_power_mingw with a custom command 00:28:02.794 [377/743] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:28:02.794 [378/743] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:28:02.794 [379/743] Generating lib/rte_rawdev_def with a custom command 00:28:02.794 [380/743] Linking target lib/librte_eventdev.so.23.0 00:28:02.794 [381/743] Generating lib/rte_rawdev_mingw with a custom command 00:28:03.052 [382/743] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:28:03.053 [383/743] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:28:03.053 [384/743] Linking target lib/librte_lpm.so.23.0 00:28:03.053 [385/743] Generating lib/rte_regexdev_def with a custom command 00:28:03.053 [386/743] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols 00:28:03.053 [387/743] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:28:03.053 [388/743] Linking static target lib/librte_pcapng.a 00:28:03.053 [389/743] Generating lib/rte_regexdev_mingw with a custom command 00:28:03.053 [390/743] Generating lib/rte_dmadev_def with a custom command 00:28:03.053 [391/743] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o 00:28:03.053 [392/743] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:28:03.053 [393/743] Generating lib/rte_dmadev_mingw with a custom command 00:28:03.053 [394/743] Generating lib/rte_rib_def with a custom command 00:28:03.053 [395/743] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols 00:28:03.053 [396/743] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:28:03.053 [397/743] Linking static target lib/librte_rawdev.a 00:28:03.053 [398/743] Generating lib/rte_rib_mingw with a custom command 00:28:03.053 [399/743] Generating lib/rte_reorder_def with a custom command 00:28:03.311 [400/743] Generating lib/rte_reorder_mingw with a custom command 00:28:03.311 [401/743] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:28:03.311 [402/743] Linking target lib/librte_pcapng.so.23.0 00:28:03.311 [403/743] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:28:03.311 [404/743] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:28:03.311 [405/743] Linking static target lib/librte_power.a 00:28:03.311 [406/743] Linking static target lib/librte_dmadev.a 00:28:03.570 [407/743] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols 00:28:03.570 [408/743] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:28:03.570 [409/743] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:28:03.570 [410/743] Linking target lib/librte_rawdev.so.23.0 00:28:03.570 [411/743] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:28:03.570 [412/743] Linking static target lib/librte_regexdev.a 00:28:03.570 [413/743] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:28:03.829 [414/743] Generating lib/rte_sched_def with a custom command 00:28:03.829 [415/743] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:28:03.829 [416/743] Generating lib/rte_sched_mingw with a custom command 00:28:03.829 [417/743] Generating lib/rte_security_def with a custom command 00:28:03.829 
[418/743] Generating lib/rte_security_mingw with a custom command 00:28:03.829 [419/743] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:28:03.829 [420/743] Linking static target lib/librte_member.a 00:28:03.829 [421/743] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:28:03.829 [422/743] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:28:03.829 [423/743] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:28:03.829 [424/743] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:28:04.087 [425/743] Linking target lib/librte_dmadev.so.23.0 00:28:04.087 [426/743] Linking static target lib/librte_reorder.a 00:28:04.087 [427/743] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:28:04.087 [428/743] Generating lib/rte_stack_def with a custom command 00:28:04.087 [429/743] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:28:04.087 [430/743] Linking static target lib/librte_stack.a 00:28:04.087 [431/743] Generating lib/rte_stack_mingw with a custom command 00:28:04.087 [432/743] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols 00:28:04.087 [433/743] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:28:04.087 [434/743] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:28:04.087 [435/743] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:28:04.087 [436/743] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:28:04.346 [437/743] Linking target lib/librte_member.so.23.0 00:28:04.346 [438/743] Linking target lib/librte_stack.so.23.0 00:28:04.346 [439/743] Linking target lib/librte_reorder.so.23.0 00:28:04.346 [440/743] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:28:04.346 [441/743] Linking static target lib/librte_rib.a 00:28:04.346 [442/743] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:28:04.346 [443/743] Linking target lib/librte_power.so.23.0 00:28:04.346 [444/743] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:28:04.346 [445/743] Linking target lib/librte_regexdev.so.23.0 00:28:04.604 [446/743] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:28:04.604 [447/743] Linking static target lib/librte_security.a 00:28:04.604 [448/743] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:28:04.604 [449/743] Linking target lib/librte_rib.so.23.0 00:28:04.862 [450/743] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols 00:28:04.862 [451/743] Generating lib/rte_vhost_def with a custom command 00:28:04.862 [452/743] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:28:04.862 [453/743] Generating lib/rte_vhost_mingw with a custom command 00:28:04.862 [454/743] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:28:05.121 [455/743] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:28:05.121 [456/743] Linking target lib/librte_security.so.23.0 00:28:05.121 [457/743] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:28:05.121 [458/743] Linking static target lib/librte_sched.a 00:28:05.121 [459/743] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols 00:28:05.121 [460/743] Compiling C object 
lib/librte_vhost.a.p/vhost_socket.c.o 00:28:05.688 [461/743] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:28:05.688 [462/743] Linking target lib/librte_sched.so.23.0 00:28:05.688 [463/743] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:28:05.688 [464/743] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols 00:28:05.688 [465/743] Generating lib/rte_ipsec_def with a custom command 00:28:05.688 [466/743] Generating lib/rte_ipsec_mingw with a custom command 00:28:05.688 [467/743] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:28:05.946 [468/743] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:28:05.946 [469/743] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:28:05.946 [470/743] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:28:05.946 [471/743] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:28:06.513 [472/743] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:28:06.513 [473/743] Generating lib/rte_fib_def with a custom command 00:28:06.513 [474/743] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:28:06.513 [475/743] Linking static target lib/fib/libtrie_avx512_tmp.a 00:28:06.513 [476/743] Generating lib/rte_fib_mingw with a custom command 00:28:06.513 [477/743] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:28:06.513 [478/743] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:28:06.513 [479/743] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:28:06.771 [480/743] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:28:06.771 [481/743] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:28:06.771 [482/743] Linking static target lib/librte_ipsec.a 00:28:07.030 [483/743] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:28:07.030 [484/743] Linking target lib/librte_ipsec.so.23.0 00:28:07.030 [485/743] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:28:07.288 [486/743] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:28:07.288 [487/743] Linking static target lib/librte_fib.a 00:28:07.288 [488/743] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:28:07.288 [489/743] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:28:07.546 [490/743] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:28:07.546 [491/743] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:28:07.546 [492/743] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:28:07.546 [493/743] Linking target lib/librte_fib.so.23.0 00:28:07.546 [494/743] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:28:08.481 [495/743] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:28:08.481 [496/743] Generating lib/rte_port_def with a custom command 00:28:08.481 [497/743] Generating lib/rte_port_mingw with a custom command 00:28:08.481 [498/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:28:08.481 [499/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:28:08.481 [500/743] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:28:08.481 [501/743] Generating lib/rte_pdump_def with a custom command 00:28:08.481 [502/743] Generating lib/rte_pdump_mingw with a custom command 00:28:08.481 [503/743] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 
00:28:08.481 [504/743] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:28:08.739 [505/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:28:08.739 [506/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:28:08.739 [507/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:28:08.739 [508/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:28:08.739 [509/743] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:28:08.997 [510/743] Linking static target lib/librte_port.a 00:28:09.256 [511/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:28:09.256 [512/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:28:09.256 [513/743] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:28:09.256 [514/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:28:09.514 [515/743] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:28:09.514 [516/743] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:28:09.514 [517/743] Linking target lib/librte_port.so.23.0 00:28:09.514 [518/743] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols 00:28:09.514 [519/743] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:28:09.514 [520/743] Linking static target lib/librte_pdump.a 00:28:09.773 [521/743] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:28:09.773 [522/743] Linking target lib/librte_pdump.so.23.0 00:28:10.031 [523/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:28:10.031 [524/743] Generating lib/rte_table_def with a custom command 00:28:10.031 [525/743] Generating lib/rte_table_mingw with a custom command 00:28:10.031 [526/743] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:28:10.290 [527/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:28:10.290 [528/743] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:28:10.548 [529/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:28:10.548 [530/743] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:28:10.548 [531/743] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:28:10.548 [532/743] Generating lib/rte_pipeline_def with a custom command 00:28:10.548 [533/743] Generating lib/rte_pipeline_mingw with a custom command 00:28:10.548 [534/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:28:10.548 [535/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:28:10.548 [536/743] Linking static target lib/librte_table.a 00:28:10.806 [537/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:28:11.064 [538/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:28:11.322 [539/743] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:28:11.322 [540/743] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:28:11.322 [541/743] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:28:11.322 [542/743] Linking target lib/librte_table.so.23.0 00:28:11.322 [543/743] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:28:11.322 [544/743] Generating lib/rte_graph_def with a custom command 00:28:11.581 [545/743] 
Generating lib/rte_graph_mingw with a custom command 00:28:11.581 [546/743] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols 00:28:11.839 [547/743] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:28:11.839 [548/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:28:11.839 [549/743] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:28:12.097 [550/743] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:28:12.097 [551/743] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:28:12.097 [552/743] Linking static target lib/librte_graph.a 00:28:12.355 [553/743] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:28:12.355 [554/743] Compiling C object lib/librte_node.a.p/node_null.c.o 00:28:12.355 [555/743] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:28:12.613 [556/743] Compiling C object lib/librte_node.a.p/node_log.c.o 00:28:12.613 [557/743] Generating lib/rte_node_def with a custom command 00:28:12.613 [558/743] Generating lib/rte_node_mingw with a custom command 00:28:12.872 [559/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:28:12.872 [560/743] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:28:12.872 [561/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:28:12.872 [562/743] Linking target lib/librte_graph.so.23.0 00:28:13.130 [563/743] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:28:13.130 [564/743] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:28:13.130 [565/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:28:13.130 [566/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:28:13.130 [567/743] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols 00:28:13.130 [568/743] Generating drivers/rte_bus_pci_def with a custom command 00:28:13.130 [569/743] Generating drivers/rte_bus_pci_mingw with a custom command 00:28:13.130 [570/743] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:28:13.130 [571/743] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:28:13.130 [572/743] Generating drivers/rte_bus_vdev_def with a custom command 00:28:13.388 [573/743] Generating drivers/rte_bus_vdev_mingw with a custom command 00:28:13.388 [574/743] Generating drivers/rte_mempool_ring_def with a custom command 00:28:13.388 [575/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:28:13.388 [576/743] Generating drivers/rte_mempool_ring_mingw with a custom command 00:28:13.388 [577/743] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:28:13.388 [578/743] Linking static target lib/librte_node.a 00:28:13.388 [579/743] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:28:13.388 [580/743] Linking static target drivers/libtmp_rte_bus_vdev.a 00:28:13.388 [581/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:28:13.646 [582/743] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:28:13.646 [583/743] Linking target lib/librte_node.so.23.0 00:28:13.646 [584/743] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:28:13.646 [585/743] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:28:13.646 [586/743] Linking static target 
drivers/librte_bus_vdev.a 00:28:13.904 [587/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:28:13.904 [588/743] Linking static target drivers/libtmp_rte_bus_pci.a 00:28:13.904 [589/743] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:28:13.904 [590/743] Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:28:13.904 [591/743] Linking target drivers/librte_bus_vdev.so.23.0 00:28:13.904 [592/743] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:28:13.904 [593/743] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:28:13.904 [594/743] Linking static target drivers/librte_bus_pci.a 00:28:14.162 [595/743] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols 00:28:14.162 [596/743] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:28:14.162 [597/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:28:14.419 [598/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:28:14.419 [599/743] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:28:14.419 [600/743] Linking target drivers/librte_bus_pci.so.23.0 00:28:14.419 [601/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:28:14.419 [602/743] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols 00:28:14.677 [603/743] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:28:14.677 [604/743] Linking static target drivers/libtmp_rte_mempool_ring.a 00:28:14.677 [605/743] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:28:14.677 [606/743] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:28:14.677 [607/743] Linking static target drivers/librte_mempool_ring.a 00:28:14.677 [608/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:28:14.677 [609/743] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:28:14.934 [610/743] Linking target drivers/librte_mempool_ring.so.23.0 00:28:15.191 [611/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:28:15.449 [612/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:28:15.707 [613/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:28:15.707 [614/743] Linking static target drivers/net/i40e/base/libi40e_base.a 00:28:15.965 [615/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:28:16.223 [616/743] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:28:16.223 [617/743] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:28:16.481 [618/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:28:16.739 [619/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:28:16.739 [620/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:28:16.997 [621/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:28:16.997 [622/743] Generating drivers/rte_net_i40e_def with a custom command 00:28:16.997 [623/743] Generating drivers/rte_net_i40e_mingw with a custom command 
00:28:16.997 [624/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:28:17.255 [625/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:28:18.189 [626/743] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:28:18.189 [627/743] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:28:18.447 [628/743] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:28:18.447 [629/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:28:18.447 [630/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:28:18.447 [631/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:28:18.447 [632/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:28:18.705 [633/743] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:28:18.705 [634/743] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:28:18.963 [635/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_avx2.c.o 00:28:18.963 [636/743] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:28:19.221 [637/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:28:19.479 [638/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:28:19.479 [639/743] Linking static target drivers/libtmp_rte_net_i40e.a 00:28:19.737 [640/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:28:19.737 [641/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:28:19.737 [642/743] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:28:19.737 [643/743] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:28:19.995 [644/743] Linking static target drivers/librte_net_i40e.a 00:28:19.995 [645/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:28:19.995 [646/743] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:28:19.995 [647/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:28:20.253 [648/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:28:20.254 [649/743] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:28:20.254 [650/743] Linking static target lib/librte_vhost.a 00:28:20.512 [651/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:28:20.512 [652/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:28:20.512 [653/743] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:28:20.512 [654/743] Linking target drivers/librte_net_i40e.so.23.0 00:28:20.771 [655/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:28:20.771 [656/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:28:21.030 [657/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:28:21.288 [658/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:28:21.288 [659/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:28:21.288 [660/743] Compiling C 
object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:28:21.288 [661/743] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:28:21.547 [662/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:28:21.547 [663/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:28:21.547 [664/743] Linking target lib/librte_vhost.so.23.0 00:28:21.547 [665/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:28:21.547 [666/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:28:21.806 [667/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:28:21.806 [668/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:28:22.065 [669/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:28:22.065 [670/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:28:22.324 [671/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:28:22.324 [672/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:28:22.324 [673/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:28:22.892 [674/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:28:23.151 [675/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:28:23.151 [676/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:28:23.410 [677/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:28:23.410 [678/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:28:23.410 [679/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:28:23.669 [680/743] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:28:23.669 [681/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:28:24.049 [682/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:28:24.049 [683/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:28:24.049 [684/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:28:24.049 [685/743] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:28:24.049 [686/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:28:24.307 [687/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:28:24.307 [688/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:28:24.566 [689/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:28:24.566 [690/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:28:24.566 [691/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:28:24.566 [692/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:28:24.566 [693/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:28:24.824 [694/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:28:25.082 [695/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:28:25.082 [696/743] Compiling C object 
app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:28:25.340 [697/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:28:25.599 [698/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:28:25.599 [699/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:28:26.165 [700/743] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:28:26.166 [701/743] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:28:26.166 [702/743] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:28:26.166 [703/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:28:26.166 [704/743] Linking static target lib/librte_pipeline.a 00:28:26.424 [705/743] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:28:26.424 [706/743] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:28:26.424 [707/743] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:28:26.682 [708/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:28:26.941 [709/743] Linking target app/dpdk-dumpcap 00:28:26.941 [710/743] Linking target app/dpdk-pdump 00:28:26.941 [711/743] Linking target app/dpdk-proc-info 00:28:26.941 [712/743] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:28:27.199 [713/743] Linking target app/dpdk-test-acl 00:28:27.199 [714/743] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:28:27.457 [715/743] Linking target app/dpdk-test-bbdev 00:28:27.457 [716/743] Linking target app/dpdk-test-compress-perf 00:28:27.457 [717/743] Linking target app/dpdk-test-cmdline 00:28:27.457 [718/743] Linking target app/dpdk-test-crypto-perf 00:28:27.716 [719/743] Linking target app/dpdk-test-eventdev 00:28:27.716 [720/743] Linking target app/dpdk-test-fib 00:28:27.716 [721/743] Linking target app/dpdk-test-flow-perf 00:28:27.974 [722/743] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:28:27.974 [723/743] Linking target app/dpdk-test-gpudev 00:28:27.974 [724/743] Linking target app/dpdk-test-pipeline 00:28:27.974 [725/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:28:28.232 [726/743] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:28:28.490 [727/743] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:28:28.490 [728/743] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:28:28.490 [729/743] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:28:28.748 [730/743] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:28:28.748 [731/743] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:28:28.748 [732/743] Linking target lib/librte_pipeline.so.23.0 00:28:29.006 [733/743] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:28:29.006 [734/743] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:28:29.263 [735/743] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:28:29.263 [736/743] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:28:29.263 [737/743] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:28:29.521 [738/743] Linking target app/dpdk-test-sad 00:28:29.521 [739/743] Linking target app/dpdk-test-regex 00:28:29.778 [740/743] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:28:30.037 [741/743] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:28:30.295 [742/743] Linking target 
app/dpdk-test-security-perf 00:28:30.295 [743/743] Linking target app/dpdk-testpmd 00:28:30.295 12:50:49 -- common/autobuild_common.sh@187 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:28:30.554 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:28:30.554 [0/1] Installing files. 00:28:30.815 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:28:30.815 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:28:30.815 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:28:30.815 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:28:30.815 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:28:30.815 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:28:30.815 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:28:30.815 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:28:30.815 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:28:30.815 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:28:30.815 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:28:30.815 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:28:30.815 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:28:30.815 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:28:30.815 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:28:30.815 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:28:30.815 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:28:30.815 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:28:30.815 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:28:30.815 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:28:30.815 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:28:30.815 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:28:30.815 Installing 
/home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:28:30.815 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:28:30.815 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:28:30.815 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:28:30.815 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:28:30.815 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:28:30.815 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:28:30.815 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:28:30.815 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:28:30.815 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:28:30.815 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:28:30.815 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:28:30.815 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:28:30.815 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:28:30.815 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:28:30.815 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:28:30.815 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:28:30.816 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:28:30.816 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:28:30.816 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:28:30.816 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:28:30.816 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:28:30.816 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:28:30.816 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:28:30.816 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:28:30.816 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:28:30.816 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:28:30.816 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:28:30.816 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:28:30.816 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:28:30.816 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:28:30.816 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:28:30.816 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:28:30.816 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/flow_classify.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:28:30.816 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/ipv4_rules_file.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:28:30.816 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:28:30.816 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:28:30.816 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:28:30.816 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:28:30.816 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:28:30.816 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:28:30.816 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:28:30.816 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 
00:28:30.816 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:28:30.816 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:28:30.816 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:28:30.816 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:28:30.816 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:28:30.816 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:28:30.816 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:28:30.816 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:28:30.816 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:28:30.816 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:28:30.816 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:28:30.816 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:28:30.816 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:28:30.816 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:28:30.816 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:28:30.816 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:28:30.816 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:28:30.816 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:28:30.816 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:28:30.816 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:28:30.816 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:28:30.816 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:28:30.816 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:28:30.816 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:28:30.816 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:28:30.816 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:28:30.816 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:28:30.816 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:28:30.816 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:28:30.816 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:28:30.816 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:28:30.816 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/kni.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:28:30.816 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:28:30.816 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:28:30.816 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:28:30.816 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:28:30.816 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:28:30.816 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:28:30.816 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:28:30.816 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:28:30.816 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:28:30.816 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:28:30.816 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:28:30.816 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:28:30.816 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:28:30.816 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:28:30.816 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:28:30.816 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:28:30.816 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:28:30.816 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:28:30.816 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:28:30.816 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:28:30.816 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:28:30.816 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:28:30.817 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:28:30.817 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:28:30.817 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:28:30.817 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:28:30.817 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:28:30.817 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:28:30.817 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:28:30.817 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:28:30.817 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:28:30.817 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:28:30.817 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:28:30.817 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:28:30.817 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:28:30.817 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 
00:28:30.817 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:28:30.817 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:28:30.817 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:28:30.817 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:28:30.817 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:28:30.817 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:28:30.817 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:28:30.817 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:28:30.817 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:28:30.817 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:28:30.817 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:28:30.817 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:28:30.817 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:28:30.817 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:28:30.817 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:28:30.817 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:28:30.817 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:28:30.817 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:28:30.817 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:28:30.817 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:28:30.817 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:28:30.817 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:28:30.817 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:28:30.817 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:28:30.817 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:28:30.817 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:28:30.817 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:28:30.817 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:28:30.817 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:28:30.817 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:28:30.817 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:28:30.817 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:28:30.817 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:28:30.817 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:28:30.817 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:28:30.817 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:28:30.817 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:28:30.817 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:28:30.817 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:28:30.817 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:28:30.817 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:28:30.817 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:28:30.817 
Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:28:30.817 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:28:30.817 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:28:30.817 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:28:30.817 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:28:30.817 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:28:30.817 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:28:30.817 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:28:30.817 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:28:30.817 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:28:30.817 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:28:30.817 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:28:30.817 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:28:30.817 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:28:30.817 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:28:30.817 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:28:30.817 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:28:30.817 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:28:30.817 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:28:30.817 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:28:30.817 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:28:30.817 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:28:30.817 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:28:30.817 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:28:30.817 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:28:30.817 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:28:30.817 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:28:30.818 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:28:30.818 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:28:30.818 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:28:30.818 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:28:30.818 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:28:30.818 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:28:30.818 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:28:30.818 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:28:30.818 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:28:30.818 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:28:30.818 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:28:30.818 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:28:30.818 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:28:30.818 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:28:30.818 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:28:30.818 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:28:30.818 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:28:30.818 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:28:30.818 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:28:30.818 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:28:30.818 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:28:30.818 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:28:30.818 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:28:30.818 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:28:30.818 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:28:30.818 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:28:30.818 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:28:30.818 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:28:30.818 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:28:30.818 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:28:30.818 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:28:30.818 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:28:30.818 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:28:30.818 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:28:30.818 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:28:30.818 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:28:30.818 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:28:30.818 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:28:30.818 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:28:30.818 Installing 
/home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:28:30.818 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:28:30.818 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:28:30.818 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:28:30.818 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:28:30.818 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:28:30.818 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:28:30.818 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:28:30.818 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:28:30.818 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:28:30.818 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:28:30.818 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:28:30.818 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:28:30.818 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:28:30.818 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:28:30.818 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:28:30.818 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:28:30.818 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:28:30.818 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:28:30.818 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:28:30.818 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:28:30.818 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:28:30.818 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:28:30.818 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:28:30.818 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:28:30.818 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:28:30.818 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:28:30.818 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:28:30.818 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:28:30.818 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:28:30.818 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:28:30.818 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:28:30.818 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:28:30.818 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:28:30.818 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:28:30.818 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:28:30.818 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:28:30.818 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:28:30.818 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:28:30.818 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:28:30.818 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:28:30.818 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:28:30.819 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:28:30.819 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:28:30.819 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:28:30.819 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:28:30.819 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:28:30.819 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:28:30.819 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:28:30.819 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:28:30.819 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:28:30.819 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:28:30.819 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:28:30.819 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:28:30.819 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:28:30.819 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:28:30.819 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:28:30.819 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:28:30.819 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:28:30.819 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:28:30.819 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:28:30.819 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:28:30.819 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:28:30.819 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:28:30.819 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:28:30.819 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:28:30.819 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:28:30.819 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:28:30.819 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:28:30.819 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:28:30.819 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:28:30.819 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:28:30.819 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:28:30.819 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:28:30.819 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:28:30.819 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:28:30.819 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:28:30.819 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:28:30.819 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:28:30.819 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:28:30.819 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:28:30.819 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:28:30.819 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node 00:28:30.819 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/node.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node 00:28:30.819 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:28:30.819 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:28:30.819 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:28:30.819 Installing 
/home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:28:30.819 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:28:30.819 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:28:30.819 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:28:30.819 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:28:30.819 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:28:30.819 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:28:30.819 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:28:30.819 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:28:30.819 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:28:30.819 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:28:30.819 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:28:30.819 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:28:30.819 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:28:30.819 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:28:30.819 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:28:30.819 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:28:30.819 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:28:30.819 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:28:30.819 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:28:30.819 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:28:30.819 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:28:30.819 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:28:30.819 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:28:30.819 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:28:30.819 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:28:30.819 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:28:30.819 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:28:30.819 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:28:30.819 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:28:30.819 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:28:30.819 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:28:30.819 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:28:30.819 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:28:30.819 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:28:30.819 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:28:30.819 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:28:30.819 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:28:30.819 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:28:30.820 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:28:30.820 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:28:30.820 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:28:30.820 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:28:30.820 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:28:30.820 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:28:30.820 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:28:30.820 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:28:30.820 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:28:30.820 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:28:30.820 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:28:30.820 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:30.820 Installing lib/librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:30.820 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:30.820 Installing lib/librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:30.820 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:30.820 Installing lib/librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:30.820 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:30.820 Installing lib/librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:30.820 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:30.820 Installing lib/librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:30.820 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:30.820 Installing lib/librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:30.820 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:30.820 Installing lib/librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:30.820 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:31.081 Installing lib/librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:31.081 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:31.081 Installing lib/librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:31.081 Installing lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:31.081 Installing lib/librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:31.081 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:31.081 Installing lib/librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:31.081 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:31.081 Installing lib/librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:31.081 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:31.081 Installing lib/librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:31.081 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:31.081 Installing lib/librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:31.081 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:31.081 Installing lib/librte_timer.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:28:31.081 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:31.081 Installing lib/librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:31.081 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:31.081 Installing lib/librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:31.081 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:31.081 Installing lib/librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:31.081 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:31.081 Installing lib/librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:31.081 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:31.081 Installing lib/librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:31.081 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:31.081 Installing lib/librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:31.081 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:31.081 Installing lib/librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:31.081 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:31.081 Installing lib/librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:31.081 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:31.081 Installing lib/librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:31.081 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:31.081 Installing lib/librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:31.081 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:31.081 Installing lib/librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:31.081 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:31.081 Installing lib/librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:31.081 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:31.081 Installing lib/librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:31.081 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:31.081 Installing lib/librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:31.081 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:31.081 Installing lib/librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:31.081 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:31.081 Installing lib/librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:31.081 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:31.081 Installing lib/librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:31.081 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:31.081 Installing lib/librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:31.081 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:31.081 Installing lib/librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:31.081 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:31.081 Installing 
lib/librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:31.081 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:31.081 Installing lib/librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:31.081 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:31.081 Installing lib/librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:31.081 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:31.081 Installing lib/librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:31.081 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:31.081 Installing lib/librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:31.081 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:31.081 Installing lib/librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:31.081 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:31.081 Installing lib/librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:31.081 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:31.081 Installing lib/librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:31.081 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:31.081 Installing lib/librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:31.081 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:31.081 Installing lib/librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:31.081 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:31.081 Installing lib/librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:31.081 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:31.081 Installing lib/librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:31.081 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:31.081 Installing lib/librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:31.081 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:31.081 Installing lib/librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:31.081 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:31.081 Installing lib/librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:31.081 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:31.081 Installing lib/librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:31.081 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:31.081 Installing lib/librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:31.081 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:31.081 Installing lib/librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:31.081 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:31.081 Installing drivers/librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:28:31.081 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:31.081 Installing drivers/librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:28:31.081 Installing drivers/librte_mempool_ring.a to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:28:31.081 Installing drivers/librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:28:31.081 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:31.089 Installing drivers/librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:28:31.089 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:28:31.089 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:28:31.089 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:28:31.089 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:28:31.089 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:28:31.089 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:28:31.089 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:28:31.089 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:28:31.089 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:28:31.089 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:28:31.089 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:28:31.089 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:28:31.089 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:28:31.089 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:28:31.089 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:28:31.089 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:28:31.089 Installing app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:28:31.089 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.089 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.089 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.089 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:28:31.089 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:28:31.089 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:28:31.090 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:28:31.090 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:28:31.090 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:28:31.090 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:28:31.090 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:28:31.090 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:28:31.090 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:28:31.090 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:28:31.090 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:28:31.090 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.090 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.090 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.090 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.090 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.090 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.090 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.090 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.090 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.090 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.090 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.090 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.090 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.090 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.090 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.090 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.090 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.090 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.090 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.090 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.090 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.090 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.090 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.090 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:28:31.090 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.090 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.090 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.090 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.090 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.090 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.090 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.090 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.090 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.090 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.090 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.090 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.090 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.090 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.090 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.090 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.090 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.090 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.090 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.090 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.090 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.090 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.090 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.090 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.090 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.090 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.090 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 
00:28:31.090 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.090 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.090 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.090 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.090 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.090 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.090 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.090 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.090 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.090 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.090 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.090 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.090 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.090 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.090 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.090 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.090 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.090 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.090 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.090 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.090 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.090 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.090 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.090 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.090 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.090 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.090 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.090 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.090 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.090 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.090 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.090 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.090 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.090 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.090 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.090 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.090 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.090 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.090 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.090 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.090 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.090 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.090 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.091 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.091 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.091 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.091 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.091 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.091 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.091 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.091 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.091 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.091 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.091 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.091 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.091 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.091 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 
00:28:31.091 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.091 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.091 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.091 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.091 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.091 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.091 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.091 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.091 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.091 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.091 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.091 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.091 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.091 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.091 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.091 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.091 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.091 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.091 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.091 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.091 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.091 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.091 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.091 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.091 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.091 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.091 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.091 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.091 
Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.091 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.091 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.091 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.091 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.091 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.091 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.091 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.091 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.091 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.091 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.091 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.091 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.091 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.091 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.091 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.091 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.091 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.091 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.091 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.091 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.091 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.091 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.091 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.091 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.091 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.091 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.091 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.091 Installing 
/home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.091 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.091 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.091 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.091 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.091 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.091 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.091 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.091 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.091 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.091 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.091 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.091 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.091 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.091 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.091 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.091 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.091 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.091 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.091 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.091 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.091 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.091 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.091 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.091 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.091 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.091 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.091 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.091 
Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.091 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.091 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_empty_poll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.091 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_intel_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.091 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.091 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.091 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.091 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.092 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.092 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.092 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.092 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.092 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.092 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.092 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.092 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.092 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.092 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.092 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.092 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.092 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.092 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.092 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.092 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.092 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.092 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.092 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.092 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.092 Installing 
/home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.092 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.092 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.092 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.092 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.092 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.092 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.092 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.092 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.092 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.092 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.092 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.092 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.092 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.092 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.092 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.092 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.092 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.092 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.092 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.092 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.092 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.092 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.092 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.092 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.092 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.092 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.092 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.092 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:28:31.092 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.092 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.092 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.092 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.092 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.092 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.092 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.092 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.092 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.092 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.092 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.092 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.092 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.092 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.092 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.092 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.092 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.092 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.092 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.092 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.092 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.092 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.092 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.092 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.092 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.092 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.092 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.092 Installing 
/home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.092 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.092 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.092 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:28:31.092 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:28:31.092 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:28:31.092 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:28:31.092 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:31.092 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:28:31.092 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:28:31.092 Installing symlink pointing to librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.23 00:28:31.092 Installing symlink pointing to librte_kvargs.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:28:31.092 Installing symlink pointing to librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.23 00:28:31.092 Installing symlink pointing to librte_telemetry.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:28:31.092 Installing symlink pointing to librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.23 00:28:31.092 Installing symlink pointing to librte_eal.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:28:31.092 Installing symlink pointing to librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.23 00:28:31.092 Installing symlink pointing to librte_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:28:31.092 Installing symlink pointing to librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.23 00:28:31.092 Installing symlink pointing to librte_rcu.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:28:31.092 Installing symlink pointing to librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.23 00:28:31.092 Installing symlink pointing to librte_mempool.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:28:31.092 Installing symlink pointing to librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.23 00:28:31.092 Installing symlink pointing to librte_mbuf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:28:31.092 Installing symlink pointing to librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.23 00:28:31.092 Installing symlink pointing to librte_net.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:28:31.092 Installing symlink pointing to librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.23 00:28:31.092 Installing symlink pointing to librte_meter.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:28:31.092 Installing symlink pointing to librte_ethdev.so.23.0 
to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.23 00:28:31.092 Installing symlink pointing to librte_ethdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:28:31.093 Installing symlink pointing to librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.23 00:28:31.093 Installing symlink pointing to librte_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:28:31.093 Installing symlink pointing to librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.23 00:28:31.093 Installing symlink pointing to librte_cmdline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:28:31.093 Installing symlink pointing to librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.23 00:28:31.093 Installing symlink pointing to librte_metrics.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:28:31.093 Installing symlink pointing to librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.23 00:28:31.093 Installing symlink pointing to librte_hash.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:28:31.093 Installing symlink pointing to librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.23 00:28:31.093 Installing symlink pointing to librte_timer.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:28:31.093 Installing symlink pointing to librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.23 00:28:31.093 Installing symlink pointing to librte_acl.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:28:31.093 Installing symlink pointing to librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.23 00:28:31.093 Installing symlink pointing to librte_bbdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:28:31.093 Installing symlink pointing to librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.23 00:28:31.093 Installing symlink pointing to librte_bitratestats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:28:31.093 Installing symlink pointing to librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.23 00:28:31.093 Installing symlink pointing to librte_bpf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:28:31.093 Installing symlink pointing to librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.23 00:28:31.093 Installing symlink pointing to librte_cfgfile.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:28:31.093 Installing symlink pointing to librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.23 00:28:31.093 Installing symlink pointing to librte_compressdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:28:31.093 Installing symlink pointing to librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.23 00:28:31.093 Installing symlink pointing to librte_cryptodev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:28:31.093 Installing symlink pointing to librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.23 00:28:31.093 Installing symlink pointing to librte_distributor.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:28:31.093 Installing symlink pointing to librte_efd.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.23 00:28:31.093 Installing symlink pointing to librte_efd.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:28:31.093 Installing symlink pointing to librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.23 00:28:31.093 Installing symlink pointing to librte_eventdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:28:31.093 Installing symlink pointing to librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.23 00:28:31.352 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:28:31.352 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:28:31.352 './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:28:31.352 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:28:31.352 './librte_bus_vdev.so.23' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:28:31.352 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:28:31.352 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:28:31.352 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:28:31.352 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:28:31.352 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so' 00:28:31.352 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:28:31.352 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:28:31.352 Installing symlink pointing to librte_gpudev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:28:31.352 Installing symlink pointing to librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.23 00:28:31.352 Installing symlink pointing to librte_gro.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:28:31.352 Installing symlink pointing to librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.23 00:28:31.352 Installing symlink pointing to librte_gso.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:28:31.352 Installing symlink pointing to librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.23 00:28:31.352 Installing symlink pointing to librte_ip_frag.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:28:31.352 Installing symlink pointing to librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.23 00:28:31.352 Installing symlink pointing to librte_jobstats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:28:31.352 Installing symlink pointing to librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.23 00:28:31.352 Installing symlink pointing to librte_latencystats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:28:31.352 Installing symlink pointing to librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.23 00:28:31.352 Installing symlink pointing to librte_lpm.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:28:31.352 Installing symlink pointing to librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.23 00:28:31.352 Installing symlink pointing to librte_member.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:28:31.352 Installing symlink pointing to librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.23 
00:28:31.352 Installing symlink pointing to librte_pcapng.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:28:31.352 Installing symlink pointing to librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.23 00:28:31.352 Installing symlink pointing to librte_power.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:28:31.352 Installing symlink pointing to librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.23 00:28:31.352 Installing symlink pointing to librte_rawdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:28:31.352 Installing symlink pointing to librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.23 00:28:31.352 Installing symlink pointing to librte_regexdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:28:31.352 Installing symlink pointing to librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.23 00:28:31.352 Installing symlink pointing to librte_dmadev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:28:31.352 Installing symlink pointing to librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.23 00:28:31.352 Installing symlink pointing to librte_rib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:28:31.352 Installing symlink pointing to librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.23 00:28:31.352 Installing symlink pointing to librte_reorder.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:28:31.352 Installing symlink pointing to librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.23 00:28:31.352 Installing symlink pointing to librte_sched.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:28:31.352 Installing symlink pointing to librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.23 00:28:31.352 Installing symlink pointing to librte_security.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:28:31.352 Installing symlink pointing to librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.23 00:28:31.352 Installing symlink pointing to librte_stack.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:28:31.352 Installing symlink pointing to librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.23 00:28:31.352 Installing symlink pointing to librte_vhost.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:28:31.352 Installing symlink pointing to librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.23 00:28:31.352 Installing symlink pointing to librte_ipsec.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:28:31.352 Installing symlink pointing to librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.23 00:28:31.352 Installing symlink pointing to librte_fib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:28:31.352 Installing symlink pointing to librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.23 00:28:31.352 Installing symlink pointing to librte_port.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:28:31.352 Installing symlink pointing to librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.23 00:28:31.352 Installing symlink pointing to librte_pdump.so.23 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:28:31.352 Installing symlink pointing to librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.23 00:28:31.352 Installing symlink pointing to librte_table.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:28:31.352 Installing symlink pointing to librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.23 00:28:31.352 Installing symlink pointing to librte_pipeline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:28:31.352 Installing symlink pointing to librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.23 00:28:31.352 Installing symlink pointing to librte_graph.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:28:31.353 Installing symlink pointing to librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.23 00:28:31.353 Installing symlink pointing to librte_node.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:28:31.353 Installing symlink pointing to librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:28:31.353 Installing symlink pointing to librte_bus_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:28:31.353 Installing symlink pointing to librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:28:31.353 Installing symlink pointing to librte_bus_vdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:28:31.353 Installing symlink pointing to librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23 00:28:31.353 Installing symlink pointing to librte_mempool_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:28:31.353 Installing symlink pointing to librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23 00:28:31.353 Installing symlink pointing to librte_net_i40e.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:28:31.353 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0' 00:28:31.353 12:50:50 -- common/autobuild_common.sh@189 -- $ uname -s 00:28:31.353 12:50:50 -- common/autobuild_common.sh@189 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:28:31.353 12:50:50 -- common/autobuild_common.sh@200 -- $ cat 00:28:31.353 ************************************ 00:28:31.353 END TEST build_native_dpdk 00:28:31.353 ************************************ 00:28:31.353 12:50:50 -- common/autobuild_common.sh@205 -- $ cd /home/vagrant/spdk_repo/spdk 00:28:31.353 00:28:31.353 real 0m50.420s 00:28:31.353 user 5m57.524s 00:28:31.353 sys 0m57.143s 00:28:31.353 12:50:50 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:28:31.353 12:50:50 -- common/autotest_common.sh@10 -- $ set +x 00:28:31.353 12:50:50 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:28:31.353 12:50:50 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:28:31.353 12:50:50 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:28:31.353 12:50:50 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:28:31.353 12:50:50 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:28:31.353 12:50:50 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:28:31.353 12:50:50 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:28:31.353 
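Note: with the DPDK headers, libraries, and PMD symlinks now installed under /home/vagrant/spdk_repo/dpdk/build, autobuild next configures SPDK against that tree and builds it, which is what the following entries show. A minimal sketch of the equivalent manual steps, assuming the same repo layout as in this log (the complete flag set actually used appears verbatim in the configure entry below):

  # sketch only -- paths and flags taken from the log entries that follow
  cd /home/vagrant/spdk_repo/spdk
  ./configure --enable-debug --with-shared --with-dpdk=/home/vagrant/spdk_repo/dpdk/build
  make -j10

The configure step below reports "Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs", i.e. it resolves DPDK through the libdpdk.pc files installed a few entries above rather than through any system-wide DPDK.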
12:50:50 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-avahi --with-golang --with-shared 00:28:31.353 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:28:31.611 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:28:31.611 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:28:31.611 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:28:31.870 Using 'verbs' RDMA provider 00:28:47.312 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done. 00:28:59.522 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:28:59.522 go version go1.21.1 linux/amd64 00:28:59.522 Creating mk/config.mk...done. 00:28:59.523 Creating mk/cc.flags.mk...done. 00:28:59.523 Type 'make' to build. 00:28:59.523 12:51:18 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:28:59.523 12:51:18 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:28:59.523 12:51:18 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:28:59.523 12:51:18 -- common/autotest_common.sh@10 -- $ set +x 00:28:59.523 ************************************ 00:28:59.523 START TEST make 00:28:59.523 ************************************ 00:28:59.523 12:51:18 -- common/autotest_common.sh@1104 -- $ make -j10 00:28:59.523 make[1]: Nothing to be done for 'all'. 00:29:26.318 CC lib/ut_mock/mock.o 00:29:26.318 CC lib/log/log.o 00:29:26.318 CC lib/log/log_flags.o 00:29:26.318 CC lib/log/log_deprecated.o 00:29:26.318 CC lib/ut/ut.o 00:29:26.318 LIB libspdk_ut_mock.a 00:29:26.318 LIB libspdk_ut.a 00:29:26.318 LIB libspdk_log.a 00:29:26.318 SO libspdk_ut_mock.so.5.0 00:29:26.318 SO libspdk_ut.so.1.0 00:29:26.318 SO libspdk_log.so.6.1 00:29:26.318 SYMLINK libspdk_ut_mock.so 00:29:26.318 SYMLINK libspdk_ut.so 00:29:26.318 SYMLINK libspdk_log.so 00:29:26.318 CC lib/dma/dma.o 00:29:26.319 CC lib/util/base64.o 00:29:26.319 CC lib/util/cpuset.o 00:29:26.319 CXX lib/trace_parser/trace.o 00:29:26.319 CC lib/util/crc16.o 00:29:26.319 CC lib/util/bit_array.o 00:29:26.319 CC lib/util/crc32c.o 00:29:26.319 CC lib/util/crc32.o 00:29:26.319 CC lib/ioat/ioat.o 00:29:26.319 CC lib/vfio_user/host/vfio_user_pci.o 00:29:26.319 CC lib/util/crc32_ieee.o 00:29:26.319 CC lib/util/crc64.o 00:29:26.319 CC lib/util/dif.o 00:29:26.319 CC lib/util/fd.o 00:29:26.319 LIB libspdk_dma.a 00:29:26.319 CC lib/util/file.o 00:29:26.319 CC lib/util/hexlify.o 00:29:26.319 SO libspdk_dma.so.3.0 00:29:26.319 CC lib/util/iov.o 00:29:26.319 SYMLINK libspdk_dma.so 00:29:26.319 CC lib/vfio_user/host/vfio_user.o 00:29:26.319 CC lib/util/math.o 00:29:26.319 LIB libspdk_ioat.a 00:29:26.319 CC lib/util/pipe.o 00:29:26.319 SO libspdk_ioat.so.6.0 00:29:26.319 CC lib/util/strerror_tls.o 00:29:26.319 CC lib/util/string.o 00:29:26.319 SYMLINK libspdk_ioat.so 00:29:26.319 CC lib/util/uuid.o 00:29:26.319 CC lib/util/fd_group.o 00:29:26.319 CC lib/util/xor.o 00:29:26.319 CC lib/util/zipf.o 00:29:26.319 LIB libspdk_vfio_user.a 00:29:26.319 SO libspdk_vfio_user.so.4.0 00:29:26.319 SYMLINK libspdk_vfio_user.so 00:29:26.319 LIB libspdk_util.a 00:29:26.319 SO libspdk_util.so.8.0 00:29:26.319 SYMLINK libspdk_util.so 00:29:26.319 LIB libspdk_trace_parser.a 00:29:26.319 SO libspdk_trace_parser.so.4.0 00:29:26.319 CC 
lib/conf/conf.o 00:29:26.319 CC lib/rdma/common.o 00:29:26.319 CC lib/rdma/rdma_verbs.o 00:29:26.319 CC lib/json/json_parse.o 00:29:26.319 CC lib/json/json_util.o 00:29:26.319 CC lib/env_dpdk/env.o 00:29:26.319 CC lib/json/json_write.o 00:29:26.319 CC lib/vmd/vmd.o 00:29:26.319 CC lib/idxd/idxd.o 00:29:26.319 SYMLINK libspdk_trace_parser.so 00:29:26.319 CC lib/idxd/idxd_user.o 00:29:26.319 CC lib/idxd/idxd_kernel.o 00:29:26.319 LIB libspdk_conf.a 00:29:26.319 CC lib/vmd/led.o 00:29:26.319 CC lib/env_dpdk/memory.o 00:29:26.319 SO libspdk_conf.so.5.0 00:29:26.319 LIB libspdk_rdma.a 00:29:26.319 CC lib/env_dpdk/pci.o 00:29:26.319 LIB libspdk_json.a 00:29:26.319 SO libspdk_rdma.so.5.0 00:29:26.319 SYMLINK libspdk_conf.so 00:29:26.319 CC lib/env_dpdk/init.o 00:29:26.319 SO libspdk_json.so.5.1 00:29:26.319 SYMLINK libspdk_rdma.so 00:29:26.319 CC lib/env_dpdk/threads.o 00:29:26.319 CC lib/env_dpdk/pci_ioat.o 00:29:26.319 CC lib/env_dpdk/pci_virtio.o 00:29:26.319 SYMLINK libspdk_json.so 00:29:26.319 CC lib/env_dpdk/pci_vmd.o 00:29:26.319 CC lib/env_dpdk/pci_idxd.o 00:29:26.319 CC lib/env_dpdk/pci_event.o 00:29:26.319 CC lib/env_dpdk/sigbus_handler.o 00:29:26.319 LIB libspdk_idxd.a 00:29:26.319 LIB libspdk_vmd.a 00:29:26.319 CC lib/env_dpdk/pci_dpdk.o 00:29:26.319 CC lib/env_dpdk/pci_dpdk_2207.o 00:29:26.319 SO libspdk_idxd.so.11.0 00:29:26.319 SO libspdk_vmd.so.5.0 00:29:26.319 CC lib/jsonrpc/jsonrpc_server.o 00:29:26.319 CC lib/env_dpdk/pci_dpdk_2211.o 00:29:26.319 SYMLINK libspdk_idxd.so 00:29:26.319 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:29:26.319 CC lib/jsonrpc/jsonrpc_client.o 00:29:26.319 SYMLINK libspdk_vmd.so 00:29:26.319 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:29:26.319 LIB libspdk_jsonrpc.a 00:29:26.319 SO libspdk_jsonrpc.so.5.1 00:29:26.319 SYMLINK libspdk_jsonrpc.so 00:29:26.319 CC lib/rpc/rpc.o 00:29:26.319 LIB libspdk_env_dpdk.a 00:29:26.319 SO libspdk_env_dpdk.so.13.0 00:29:26.319 LIB libspdk_rpc.a 00:29:26.319 SO libspdk_rpc.so.5.0 00:29:26.319 SYMLINK libspdk_rpc.so 00:29:26.319 SYMLINK libspdk_env_dpdk.so 00:29:26.319 CC lib/sock/sock_rpc.o 00:29:26.319 CC lib/sock/sock.o 00:29:26.319 CC lib/notify/notify.o 00:29:26.319 CC lib/trace/trace_flags.o 00:29:26.319 CC lib/trace/trace.o 00:29:26.319 CC lib/notify/notify_rpc.o 00:29:26.319 CC lib/trace/trace_rpc.o 00:29:26.319 LIB libspdk_notify.a 00:29:26.578 SO libspdk_notify.so.5.0 00:29:26.578 SYMLINK libspdk_notify.so 00:29:26.578 LIB libspdk_trace.a 00:29:26.578 SO libspdk_trace.so.9.0 00:29:26.578 LIB libspdk_sock.a 00:29:26.578 SYMLINK libspdk_trace.so 00:29:26.578 SO libspdk_sock.so.8.0 00:29:26.836 SYMLINK libspdk_sock.so 00:29:26.836 CC lib/thread/thread.o 00:29:26.836 CC lib/thread/iobuf.o 00:29:26.836 CC lib/nvme/nvme_ctrlr.o 00:29:26.836 CC lib/nvme/nvme_ctrlr_cmd.o 00:29:26.836 CC lib/nvme/nvme_fabric.o 00:29:26.836 CC lib/nvme/nvme_ns_cmd.o 00:29:26.836 CC lib/nvme/nvme_ns.o 00:29:26.836 CC lib/nvme/nvme_pcie_common.o 00:29:26.836 CC lib/nvme/nvme_qpair.o 00:29:26.836 CC lib/nvme/nvme_pcie.o 00:29:27.404 CC lib/nvme/nvme.o 00:29:27.663 CC lib/nvme/nvme_quirks.o 00:29:27.663 CC lib/nvme/nvme_transport.o 00:29:27.663 CC lib/nvme/nvme_discovery.o 00:29:27.922 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:29:27.922 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:29:27.922 CC lib/nvme/nvme_tcp.o 00:29:28.180 CC lib/nvme/nvme_opal.o 00:29:28.180 CC lib/nvme/nvme_io_msg.o 00:29:28.180 CC lib/nvme/nvme_poll_group.o 00:29:28.439 LIB libspdk_thread.a 00:29:28.439 CC lib/nvme/nvme_zns.o 00:29:28.439 SO libspdk_thread.so.9.0 00:29:28.439 CC 
lib/nvme/nvme_cuse.o 00:29:28.439 SYMLINK libspdk_thread.so 00:29:28.439 CC lib/nvme/nvme_vfio_user.o 00:29:28.439 CC lib/nvme/nvme_rdma.o 00:29:28.697 CC lib/accel/accel.o 00:29:28.697 CC lib/blob/blobstore.o 00:29:28.697 CC lib/blob/request.o 00:29:28.955 CC lib/blob/zeroes.o 00:29:28.955 CC lib/blob/blob_bs_dev.o 00:29:29.214 CC lib/accel/accel_rpc.o 00:29:29.214 CC lib/accel/accel_sw.o 00:29:29.214 CC lib/init/json_config.o 00:29:29.214 CC lib/init/subsystem.o 00:29:29.214 CC lib/virtio/virtio.o 00:29:29.214 CC lib/virtio/virtio_vhost_user.o 00:29:29.214 CC lib/virtio/virtio_vfio_user.o 00:29:29.472 CC lib/init/subsystem_rpc.o 00:29:29.472 CC lib/init/rpc.o 00:29:29.472 CC lib/virtio/virtio_pci.o 00:29:29.472 LIB libspdk_init.a 00:29:29.472 SO libspdk_init.so.4.0 00:29:29.731 LIB libspdk_accel.a 00:29:29.731 SYMLINK libspdk_init.so 00:29:29.731 SO libspdk_accel.so.14.0 00:29:29.731 LIB libspdk_virtio.a 00:29:29.731 SYMLINK libspdk_accel.so 00:29:29.731 SO libspdk_virtio.so.6.0 00:29:29.731 CC lib/event/reactor.o 00:29:29.731 CC lib/event/app.o 00:29:29.731 CC lib/event/log_rpc.o 00:29:29.731 CC lib/event/app_rpc.o 00:29:29.731 CC lib/event/scheduler_static.o 00:29:29.731 SYMLINK libspdk_virtio.so 00:29:29.731 LIB libspdk_nvme.a 00:29:29.989 CC lib/bdev/bdev.o 00:29:29.989 CC lib/bdev/bdev_zone.o 00:29:29.989 CC lib/bdev/bdev_rpc.o 00:29:29.989 CC lib/bdev/part.o 00:29:29.989 CC lib/bdev/scsi_nvme.o 00:29:29.989 SO libspdk_nvme.so.12.0 00:29:30.248 LIB libspdk_event.a 00:29:30.248 SO libspdk_event.so.12.0 00:29:30.248 SYMLINK libspdk_nvme.so 00:29:30.248 SYMLINK libspdk_event.so 00:29:31.622 LIB libspdk_blob.a 00:29:31.622 SO libspdk_blob.so.10.1 00:29:31.622 SYMLINK libspdk_blob.so 00:29:31.622 CC lib/lvol/lvol.o 00:29:31.622 CC lib/blobfs/blobfs.o 00:29:31.622 CC lib/blobfs/tree.o 00:29:32.558 LIB libspdk_bdev.a 00:29:32.558 LIB libspdk_blobfs.a 00:29:32.558 SO libspdk_bdev.so.14.0 00:29:32.558 SO libspdk_blobfs.so.9.0 00:29:32.558 LIB libspdk_lvol.a 00:29:32.558 SO libspdk_lvol.so.9.1 00:29:32.558 SYMLINK libspdk_blobfs.so 00:29:32.558 SYMLINK libspdk_bdev.so 00:29:32.816 SYMLINK libspdk_lvol.so 00:29:32.816 CC lib/nvmf/ctrlr.o 00:29:32.816 CC lib/nbd/nbd.o 00:29:32.816 CC lib/nvmf/ctrlr_discovery.o 00:29:32.816 CC lib/nbd/nbd_rpc.o 00:29:32.816 CC lib/nvmf/ctrlr_bdev.o 00:29:32.816 CC lib/nvmf/subsystem.o 00:29:32.816 CC lib/nvmf/nvmf.o 00:29:32.816 CC lib/ublk/ublk.o 00:29:32.816 CC lib/scsi/dev.o 00:29:32.816 CC lib/ftl/ftl_core.o 00:29:33.074 CC lib/ublk/ublk_rpc.o 00:29:33.074 CC lib/scsi/lun.o 00:29:33.074 CC lib/scsi/port.o 00:29:33.332 LIB libspdk_nbd.a 00:29:33.332 CC lib/ftl/ftl_init.o 00:29:33.332 SO libspdk_nbd.so.6.0 00:29:33.332 SYMLINK libspdk_nbd.so 00:29:33.332 CC lib/ftl/ftl_layout.o 00:29:33.332 CC lib/ftl/ftl_debug.o 00:29:33.332 CC lib/nvmf/nvmf_rpc.o 00:29:33.332 CC lib/scsi/scsi.o 00:29:33.592 LIB libspdk_ublk.a 00:29:33.592 CC lib/scsi/scsi_bdev.o 00:29:33.592 CC lib/nvmf/transport.o 00:29:33.592 SO libspdk_ublk.so.2.0 00:29:33.592 CC lib/nvmf/tcp.o 00:29:33.592 CC lib/ftl/ftl_io.o 00:29:33.592 SYMLINK libspdk_ublk.so 00:29:33.592 CC lib/ftl/ftl_sb.o 00:29:33.592 CC lib/ftl/ftl_l2p.o 00:29:33.851 CC lib/nvmf/rdma.o 00:29:33.851 CC lib/ftl/ftl_l2p_flat.o 00:29:33.851 CC lib/scsi/scsi_pr.o 00:29:33.851 CC lib/ftl/ftl_nv_cache.o 00:29:34.109 CC lib/scsi/scsi_rpc.o 00:29:34.109 CC lib/scsi/task.o 00:29:34.109 CC lib/ftl/ftl_band.o 00:29:34.109 CC lib/ftl/ftl_band_ops.o 00:29:34.109 CC lib/ftl/ftl_writer.o 00:29:34.109 CC lib/ftl/ftl_rq.o 00:29:34.109 CC 
lib/ftl/ftl_reloc.o 00:29:34.366 LIB libspdk_scsi.a 00:29:34.366 SO libspdk_scsi.so.8.0 00:29:34.366 CC lib/ftl/ftl_l2p_cache.o 00:29:34.366 CC lib/ftl/ftl_p2l.o 00:29:34.366 SYMLINK libspdk_scsi.so 00:29:34.366 CC lib/ftl/mngt/ftl_mngt.o 00:29:34.366 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:29:34.624 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:29:34.624 CC lib/iscsi/conn.o 00:29:34.624 CC lib/ftl/mngt/ftl_mngt_startup.o 00:29:34.624 CC lib/ftl/mngt/ftl_mngt_md.o 00:29:34.624 CC lib/ftl/mngt/ftl_mngt_misc.o 00:29:34.882 CC lib/iscsi/init_grp.o 00:29:34.882 CC lib/iscsi/iscsi.o 00:29:34.882 CC lib/iscsi/md5.o 00:29:34.882 CC lib/iscsi/param.o 00:29:34.882 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:29:34.882 CC lib/iscsi/portal_grp.o 00:29:35.141 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:29:35.141 CC lib/iscsi/tgt_node.o 00:29:35.141 CC lib/iscsi/iscsi_subsystem.o 00:29:35.141 CC lib/iscsi/iscsi_rpc.o 00:29:35.141 CC lib/iscsi/task.o 00:29:35.141 CC lib/ftl/mngt/ftl_mngt_band.o 00:29:35.141 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:29:35.400 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:29:35.400 CC lib/vhost/vhost.o 00:29:35.400 CC lib/vhost/vhost_rpc.o 00:29:35.400 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:29:35.400 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:29:35.400 CC lib/ftl/utils/ftl_conf.o 00:29:35.400 CC lib/ftl/utils/ftl_md.o 00:29:35.400 CC lib/vhost/vhost_scsi.o 00:29:35.659 CC lib/ftl/utils/ftl_mempool.o 00:29:35.659 CC lib/ftl/utils/ftl_bitmap.o 00:29:35.659 CC lib/vhost/vhost_blk.o 00:29:35.659 CC lib/vhost/rte_vhost_user.o 00:29:35.659 LIB libspdk_nvmf.a 00:29:35.917 CC lib/ftl/utils/ftl_property.o 00:29:35.917 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:29:35.917 SO libspdk_nvmf.so.17.0 00:29:35.917 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:29:35.917 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:29:36.176 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:29:36.176 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:29:36.177 SYMLINK libspdk_nvmf.so 00:29:36.177 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:29:36.177 LIB libspdk_iscsi.a 00:29:36.177 CC lib/ftl/upgrade/ftl_sb_v3.o 00:29:36.177 CC lib/ftl/upgrade/ftl_sb_v5.o 00:29:36.177 SO libspdk_iscsi.so.7.0 00:29:36.177 CC lib/ftl/nvc/ftl_nvc_dev.o 00:29:36.177 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:29:36.177 CC lib/ftl/base/ftl_base_dev.o 00:29:36.177 CC lib/ftl/base/ftl_base_bdev.o 00:29:36.435 CC lib/ftl/ftl_trace.o 00:29:36.435 SYMLINK libspdk_iscsi.so 00:29:36.694 LIB libspdk_ftl.a 00:29:36.694 SO libspdk_ftl.so.8.0 00:29:36.694 LIB libspdk_vhost.a 00:29:36.952 SO libspdk_vhost.so.7.1 00:29:36.952 SYMLINK libspdk_vhost.so 00:29:37.210 SYMLINK libspdk_ftl.so 00:29:37.468 CC module/env_dpdk/env_dpdk_rpc.o 00:29:37.468 CC module/accel/error/accel_error.o 00:29:37.468 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:29:37.468 CC module/accel/dsa/accel_dsa.o 00:29:37.468 CC module/scheduler/gscheduler/gscheduler.o 00:29:37.468 CC module/accel/ioat/accel_ioat.o 00:29:37.468 CC module/accel/iaa/accel_iaa.o 00:29:37.468 CC module/sock/posix/posix.o 00:29:37.468 CC module/scheduler/dynamic/scheduler_dynamic.o 00:29:37.468 CC module/blob/bdev/blob_bdev.o 00:29:37.468 LIB libspdk_env_dpdk_rpc.a 00:29:37.468 SO libspdk_env_dpdk_rpc.so.5.0 00:29:37.468 LIB libspdk_scheduler_gscheduler.a 00:29:37.727 SO libspdk_scheduler_gscheduler.so.3.0 00:29:37.727 LIB libspdk_scheduler_dpdk_governor.a 00:29:37.727 CC module/accel/ioat/accel_ioat_rpc.o 00:29:37.727 CC module/accel/error/accel_error_rpc.o 00:29:37.727 SYMLINK libspdk_env_dpdk_rpc.so 00:29:37.727 CC module/accel/dsa/accel_dsa_rpc.o 00:29:37.727 CC 
module/accel/iaa/accel_iaa_rpc.o 00:29:37.727 LIB libspdk_scheduler_dynamic.a 00:29:37.727 SO libspdk_scheduler_dpdk_governor.so.3.0 00:29:37.727 SYMLINK libspdk_scheduler_gscheduler.so 00:29:37.727 SO libspdk_scheduler_dynamic.so.3.0 00:29:37.727 SYMLINK libspdk_scheduler_dpdk_governor.so 00:29:37.727 LIB libspdk_blob_bdev.a 00:29:37.727 SYMLINK libspdk_scheduler_dynamic.so 00:29:37.727 LIB libspdk_accel_ioat.a 00:29:37.727 SO libspdk_blob_bdev.so.10.1 00:29:37.727 LIB libspdk_accel_error.a 00:29:37.727 LIB libspdk_accel_dsa.a 00:29:37.727 LIB libspdk_accel_iaa.a 00:29:37.727 SO libspdk_accel_ioat.so.5.0 00:29:37.727 SYMLINK libspdk_blob_bdev.so 00:29:37.727 SO libspdk_accel_dsa.so.4.0 00:29:37.727 SO libspdk_accel_error.so.1.0 00:29:37.727 SO libspdk_accel_iaa.so.2.0 00:29:37.986 SYMLINK libspdk_accel_ioat.so 00:29:37.986 SYMLINK libspdk_accel_dsa.so 00:29:37.986 SYMLINK libspdk_accel_error.so 00:29:37.986 SYMLINK libspdk_accel_iaa.so 00:29:37.986 CC module/bdev/gpt/gpt.o 00:29:37.986 CC module/bdev/malloc/bdev_malloc.o 00:29:37.986 CC module/bdev/error/vbdev_error.o 00:29:37.986 CC module/bdev/lvol/vbdev_lvol.o 00:29:37.986 CC module/bdev/delay/vbdev_delay.o 00:29:37.986 CC module/blobfs/bdev/blobfs_bdev.o 00:29:37.986 CC module/bdev/nvme/bdev_nvme.o 00:29:37.986 CC module/bdev/passthru/vbdev_passthru.o 00:29:37.986 CC module/bdev/null/bdev_null.o 00:29:38.244 LIB libspdk_sock_posix.a 00:29:38.244 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:29:38.244 SO libspdk_sock_posix.so.5.0 00:29:38.244 CC module/bdev/gpt/vbdev_gpt.o 00:29:38.244 CC module/bdev/error/vbdev_error_rpc.o 00:29:38.244 CC module/bdev/null/bdev_null_rpc.o 00:29:38.244 SYMLINK libspdk_sock_posix.so 00:29:38.244 CC module/bdev/delay/vbdev_delay_rpc.o 00:29:38.244 CC module/bdev/malloc/bdev_malloc_rpc.o 00:29:38.244 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:29:38.244 LIB libspdk_blobfs_bdev.a 00:29:38.502 SO libspdk_blobfs_bdev.so.5.0 00:29:38.502 LIB libspdk_bdev_error.a 00:29:38.502 LIB libspdk_bdev_delay.a 00:29:38.502 LIB libspdk_bdev_null.a 00:29:38.502 LIB libspdk_bdev_gpt.a 00:29:38.502 SYMLINK libspdk_blobfs_bdev.so 00:29:38.502 SO libspdk_bdev_error.so.5.0 00:29:38.502 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:29:38.502 CC module/bdev/nvme/bdev_nvme_rpc.o 00:29:38.502 SO libspdk_bdev_delay.so.5.0 00:29:38.502 SO libspdk_bdev_null.so.5.0 00:29:38.502 SO libspdk_bdev_gpt.so.5.0 00:29:38.502 CC module/bdev/raid/bdev_raid.o 00:29:38.502 LIB libspdk_bdev_passthru.a 00:29:38.502 LIB libspdk_bdev_malloc.a 00:29:38.502 SYMLINK libspdk_bdev_error.so 00:29:38.502 SYMLINK libspdk_bdev_delay.so 00:29:38.502 SYMLINK libspdk_bdev_null.so 00:29:38.502 CC module/bdev/nvme/nvme_rpc.o 00:29:38.502 SO libspdk_bdev_passthru.so.5.0 00:29:38.502 SO libspdk_bdev_malloc.so.5.0 00:29:38.502 SYMLINK libspdk_bdev_gpt.so 00:29:38.502 CC module/bdev/nvme/bdev_mdns_client.o 00:29:38.502 SYMLINK libspdk_bdev_passthru.so 00:29:38.760 SYMLINK libspdk_bdev_malloc.so 00:29:38.760 CC module/bdev/nvme/vbdev_opal.o 00:29:38.760 CC module/bdev/zone_block/vbdev_zone_block.o 00:29:38.760 CC module/bdev/split/vbdev_split.o 00:29:38.760 CC module/bdev/aio/bdev_aio.o 00:29:38.760 CC module/bdev/aio/bdev_aio_rpc.o 00:29:38.760 LIB libspdk_bdev_lvol.a 00:29:38.760 SO libspdk_bdev_lvol.so.5.0 00:29:38.760 CC module/bdev/nvme/vbdev_opal_rpc.o 00:29:39.018 CC module/bdev/raid/bdev_raid_rpc.o 00:29:39.018 CC module/bdev/split/vbdev_split_rpc.o 00:29:39.018 SYMLINK libspdk_bdev_lvol.so 00:29:39.018 CC module/bdev/raid/bdev_raid_sb.o 00:29:39.018 CC 
module/bdev/raid/raid0.o 00:29:39.018 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:29:39.018 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:29:39.018 LIB libspdk_bdev_aio.a 00:29:39.018 LIB libspdk_bdev_split.a 00:29:39.018 SO libspdk_bdev_aio.so.5.0 00:29:39.018 CC module/bdev/raid/raid1.o 00:29:39.276 SO libspdk_bdev_split.so.5.0 00:29:39.276 CC module/bdev/raid/concat.o 00:29:39.276 SYMLINK libspdk_bdev_aio.so 00:29:39.276 CC module/bdev/ftl/bdev_ftl.o 00:29:39.276 CC module/bdev/ftl/bdev_ftl_rpc.o 00:29:39.276 SYMLINK libspdk_bdev_split.so 00:29:39.276 LIB libspdk_bdev_zone_block.a 00:29:39.276 SO libspdk_bdev_zone_block.so.5.0 00:29:39.276 CC module/bdev/iscsi/bdev_iscsi.o 00:29:39.276 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:29:39.276 CC module/bdev/virtio/bdev_virtio_scsi.o 00:29:39.276 SYMLINK libspdk_bdev_zone_block.so 00:29:39.276 CC module/bdev/virtio/bdev_virtio_blk.o 00:29:39.276 CC module/bdev/virtio/bdev_virtio_rpc.o 00:29:39.534 LIB libspdk_bdev_raid.a 00:29:39.534 SO libspdk_bdev_raid.so.5.0 00:29:39.534 LIB libspdk_bdev_ftl.a 00:29:39.534 SO libspdk_bdev_ftl.so.5.0 00:29:39.534 SYMLINK libspdk_bdev_raid.so 00:29:39.534 SYMLINK libspdk_bdev_ftl.so 00:29:39.792 LIB libspdk_bdev_iscsi.a 00:29:39.792 SO libspdk_bdev_iscsi.so.5.0 00:29:39.792 SYMLINK libspdk_bdev_iscsi.so 00:29:39.792 LIB libspdk_bdev_virtio.a 00:29:40.050 SO libspdk_bdev_virtio.so.5.0 00:29:40.050 SYMLINK libspdk_bdev_virtio.so 00:29:40.308 LIB libspdk_bdev_nvme.a 00:29:40.308 SO libspdk_bdev_nvme.so.6.0 00:29:40.308 SYMLINK libspdk_bdev_nvme.so 00:29:40.566 CC module/event/subsystems/scheduler/scheduler.o 00:29:40.566 CC module/event/subsystems/vmd/vmd.o 00:29:40.566 CC module/event/subsystems/vmd/vmd_rpc.o 00:29:40.566 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:29:40.566 CC module/event/subsystems/sock/sock.o 00:29:40.825 CC module/event/subsystems/iobuf/iobuf.o 00:29:40.825 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:29:40.825 LIB libspdk_event_vhost_blk.a 00:29:40.825 LIB libspdk_event_vmd.a 00:29:40.825 LIB libspdk_event_sock.a 00:29:40.825 SO libspdk_event_vhost_blk.so.2.0 00:29:40.825 LIB libspdk_event_scheduler.a 00:29:40.825 SO libspdk_event_vmd.so.5.0 00:29:40.825 SO libspdk_event_sock.so.4.0 00:29:40.825 LIB libspdk_event_iobuf.a 00:29:40.825 SO libspdk_event_scheduler.so.3.0 00:29:40.825 SYMLINK libspdk_event_vhost_blk.so 00:29:40.825 SO libspdk_event_iobuf.so.2.0 00:29:40.825 SYMLINK libspdk_event_vmd.so 00:29:40.825 SYMLINK libspdk_event_sock.so 00:29:40.825 SYMLINK libspdk_event_scheduler.so 00:29:41.088 SYMLINK libspdk_event_iobuf.so 00:29:41.088 CC module/event/subsystems/accel/accel.o 00:29:41.348 LIB libspdk_event_accel.a 00:29:41.348 SO libspdk_event_accel.so.5.0 00:29:41.348 SYMLINK libspdk_event_accel.so 00:29:41.636 CC module/event/subsystems/bdev/bdev.o 00:29:41.894 LIB libspdk_event_bdev.a 00:29:41.894 SO libspdk_event_bdev.so.5.0 00:29:41.894 SYMLINK libspdk_event_bdev.so 00:29:42.153 CC module/event/subsystems/scsi/scsi.o 00:29:42.153 CC module/event/subsystems/nbd/nbd.o 00:29:42.153 CC module/event/subsystems/ublk/ublk.o 00:29:42.153 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:29:42.153 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:29:42.153 LIB libspdk_event_nbd.a 00:29:42.153 LIB libspdk_event_ublk.a 00:29:42.153 LIB libspdk_event_scsi.a 00:29:42.153 SO libspdk_event_nbd.so.5.0 00:29:42.153 SO libspdk_event_ublk.so.2.0 00:29:42.412 SO libspdk_event_scsi.so.5.0 00:29:42.412 SYMLINK libspdk_event_scsi.so 00:29:42.412 SYMLINK libspdk_event_nbd.so 00:29:42.412 LIB 
libspdk_event_nvmf.a 00:29:42.412 SYMLINK libspdk_event_ublk.so 00:29:42.412 SO libspdk_event_nvmf.so.5.0 00:29:42.412 SYMLINK libspdk_event_nvmf.so 00:29:42.412 CC module/event/subsystems/iscsi/iscsi.o 00:29:42.412 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:29:42.671 LIB libspdk_event_vhost_scsi.a 00:29:42.671 LIB libspdk_event_iscsi.a 00:29:42.671 SO libspdk_event_vhost_scsi.so.2.0 00:29:42.671 SO libspdk_event_iscsi.so.5.0 00:29:42.671 SYMLINK libspdk_event_iscsi.so 00:29:42.671 SYMLINK libspdk_event_vhost_scsi.so 00:29:42.930 SO libspdk.so.5.0 00:29:42.930 SYMLINK libspdk.so 00:29:42.930 CXX app/trace/trace.o 00:29:43.189 CC examples/accel/perf/accel_perf.o 00:29:43.189 CC examples/ioat/perf/perf.o 00:29:43.189 CC examples/vmd/lsvmd/lsvmd.o 00:29:43.189 CC examples/nvme/hello_world/hello_world.o 00:29:43.189 CC examples/sock/hello_world/hello_sock.o 00:29:43.189 CC examples/nvmf/nvmf/nvmf.o 00:29:43.189 CC examples/bdev/hello_world/hello_bdev.o 00:29:43.189 CC examples/blob/hello_world/hello_blob.o 00:29:43.189 CC test/accel/dif/dif.o 00:29:43.189 LINK lsvmd 00:29:43.561 LINK hello_world 00:29:43.561 LINK ioat_perf 00:29:43.561 LINK hello_sock 00:29:43.561 LINK hello_bdev 00:29:43.561 LINK hello_blob 00:29:43.561 LINK nvmf 00:29:43.561 LINK spdk_trace 00:29:43.561 LINK dif 00:29:43.561 CC examples/vmd/led/led.o 00:29:43.561 LINK accel_perf 00:29:43.561 CC examples/ioat/verify/verify.o 00:29:43.561 CC examples/nvme/reconnect/reconnect.o 00:29:43.865 CC examples/nvme/nvme_manage/nvme_manage.o 00:29:43.865 LINK led 00:29:43.865 CC examples/bdev/bdevperf/bdevperf.o 00:29:43.865 CC examples/blob/cli/blobcli.o 00:29:43.865 CC app/trace_record/trace_record.o 00:29:43.865 CC examples/util/zipf/zipf.o 00:29:43.865 LINK verify 00:29:43.865 CC test/app/bdev_svc/bdev_svc.o 00:29:43.865 CC examples/thread/thread/thread_ex.o 00:29:44.136 LINK zipf 00:29:44.136 LINK reconnect 00:29:44.136 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:29:44.136 LINK spdk_trace_record 00:29:44.136 CC test/app/histogram_perf/histogram_perf.o 00:29:44.136 LINK bdev_svc 00:29:44.136 LINK thread 00:29:44.136 LINK histogram_perf 00:29:44.136 LINK blobcli 00:29:44.404 LINK nvme_manage 00:29:44.404 CC test/bdev/bdevio/bdevio.o 00:29:44.404 CC app/nvmf_tgt/nvmf_main.o 00:29:44.404 CC test/blobfs/mkfs/mkfs.o 00:29:44.404 LINK nvme_fuzz 00:29:44.404 TEST_HEADER include/spdk/accel.h 00:29:44.404 TEST_HEADER include/spdk/accel_module.h 00:29:44.404 TEST_HEADER include/spdk/assert.h 00:29:44.404 TEST_HEADER include/spdk/barrier.h 00:29:44.404 TEST_HEADER include/spdk/base64.h 00:29:44.404 TEST_HEADER include/spdk/bdev.h 00:29:44.404 TEST_HEADER include/spdk/bdev_module.h 00:29:44.404 TEST_HEADER include/spdk/bdev_zone.h 00:29:44.404 TEST_HEADER include/spdk/bit_array.h 00:29:44.404 TEST_HEADER include/spdk/bit_pool.h 00:29:44.404 TEST_HEADER include/spdk/blob_bdev.h 00:29:44.404 TEST_HEADER include/spdk/blobfs_bdev.h 00:29:44.404 TEST_HEADER include/spdk/blobfs.h 00:29:44.404 LINK bdevperf 00:29:44.404 TEST_HEADER include/spdk/blob.h 00:29:44.404 TEST_HEADER include/spdk/conf.h 00:29:44.404 TEST_HEADER include/spdk/config.h 00:29:44.404 TEST_HEADER include/spdk/cpuset.h 00:29:44.404 TEST_HEADER include/spdk/crc16.h 00:29:44.404 TEST_HEADER include/spdk/crc32.h 00:29:44.404 TEST_HEADER include/spdk/crc64.h 00:29:44.404 TEST_HEADER include/spdk/dif.h 00:29:44.404 TEST_HEADER include/spdk/dma.h 00:29:44.404 TEST_HEADER include/spdk/endian.h 00:29:44.404 TEST_HEADER include/spdk/env_dpdk.h 00:29:44.404 TEST_HEADER 
include/spdk/env.h 00:29:44.671 TEST_HEADER include/spdk/event.h 00:29:44.671 TEST_HEADER include/spdk/fd_group.h 00:29:44.671 TEST_HEADER include/spdk/fd.h 00:29:44.671 TEST_HEADER include/spdk/file.h 00:29:44.671 TEST_HEADER include/spdk/ftl.h 00:29:44.671 CC test/app/jsoncat/jsoncat.o 00:29:44.671 LINK nvmf_tgt 00:29:44.671 TEST_HEADER include/spdk/gpt_spec.h 00:29:44.671 TEST_HEADER include/spdk/hexlify.h 00:29:44.671 TEST_HEADER include/spdk/histogram_data.h 00:29:44.671 TEST_HEADER include/spdk/idxd.h 00:29:44.671 TEST_HEADER include/spdk/idxd_spec.h 00:29:44.671 TEST_HEADER include/spdk/init.h 00:29:44.671 TEST_HEADER include/spdk/ioat.h 00:29:44.671 TEST_HEADER include/spdk/ioat_spec.h 00:29:44.671 TEST_HEADER include/spdk/iscsi_spec.h 00:29:44.671 TEST_HEADER include/spdk/json.h 00:29:44.671 TEST_HEADER include/spdk/jsonrpc.h 00:29:44.671 CC test/dma/test_dma/test_dma.o 00:29:44.671 TEST_HEADER include/spdk/likely.h 00:29:44.671 TEST_HEADER include/spdk/log.h 00:29:44.671 TEST_HEADER include/spdk/lvol.h 00:29:44.671 CC examples/nvme/arbitration/arbitration.o 00:29:44.671 TEST_HEADER include/spdk/memory.h 00:29:44.671 TEST_HEADER include/spdk/mmio.h 00:29:44.671 TEST_HEADER include/spdk/nbd.h 00:29:44.671 TEST_HEADER include/spdk/notify.h 00:29:44.671 TEST_HEADER include/spdk/nvme.h 00:29:44.671 TEST_HEADER include/spdk/nvme_intel.h 00:29:44.671 TEST_HEADER include/spdk/nvme_ocssd.h 00:29:44.671 LINK mkfs 00:29:44.671 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:29:44.671 TEST_HEADER include/spdk/nvme_spec.h 00:29:44.671 TEST_HEADER include/spdk/nvme_zns.h 00:29:44.671 TEST_HEADER include/spdk/nvmf_cmd.h 00:29:44.671 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:29:44.671 TEST_HEADER include/spdk/nvmf.h 00:29:44.671 TEST_HEADER include/spdk/nvmf_spec.h 00:29:44.671 TEST_HEADER include/spdk/nvmf_transport.h 00:29:44.671 TEST_HEADER include/spdk/opal.h 00:29:44.671 TEST_HEADER include/spdk/opal_spec.h 00:29:44.671 TEST_HEADER include/spdk/pci_ids.h 00:29:44.671 TEST_HEADER include/spdk/pipe.h 00:29:44.671 CC test/env/mem_callbacks/mem_callbacks.o 00:29:44.671 TEST_HEADER include/spdk/queue.h 00:29:44.671 TEST_HEADER include/spdk/reduce.h 00:29:44.671 TEST_HEADER include/spdk/rpc.h 00:29:44.671 TEST_HEADER include/spdk/scheduler.h 00:29:44.671 TEST_HEADER include/spdk/scsi.h 00:29:44.671 TEST_HEADER include/spdk/scsi_spec.h 00:29:44.671 TEST_HEADER include/spdk/sock.h 00:29:44.671 TEST_HEADER include/spdk/stdinc.h 00:29:44.671 TEST_HEADER include/spdk/string.h 00:29:44.671 TEST_HEADER include/spdk/thread.h 00:29:44.671 TEST_HEADER include/spdk/trace.h 00:29:44.671 TEST_HEADER include/spdk/trace_parser.h 00:29:44.671 TEST_HEADER include/spdk/tree.h 00:29:44.671 TEST_HEADER include/spdk/ublk.h 00:29:44.671 TEST_HEADER include/spdk/util.h 00:29:44.671 TEST_HEADER include/spdk/uuid.h 00:29:44.671 TEST_HEADER include/spdk/version.h 00:29:44.671 TEST_HEADER include/spdk/vfio_user_pci.h 00:29:44.671 TEST_HEADER include/spdk/vfio_user_spec.h 00:29:44.671 TEST_HEADER include/spdk/vhost.h 00:29:44.671 TEST_HEADER include/spdk/vmd.h 00:29:44.671 TEST_HEADER include/spdk/xor.h 00:29:44.671 TEST_HEADER include/spdk/zipf.h 00:29:44.671 CXX test/cpp_headers/accel.o 00:29:44.671 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:29:44.671 LINK jsoncat 00:29:44.671 LINK bdevio 00:29:44.671 CC test/app/stub/stub.o 00:29:44.929 LINK mem_callbacks 00:29:44.930 CXX test/cpp_headers/accel_module.o 00:29:44.930 LINK arbitration 00:29:44.930 CXX test/cpp_headers/assert.o 00:29:44.930 CC app/iscsi_tgt/iscsi_tgt.o 
00:29:44.930 LINK stub 00:29:44.930 LINK test_dma 00:29:44.930 CC app/spdk_tgt/spdk_tgt.o 00:29:44.930 CC app/spdk_lspci/spdk_lspci.o 00:29:44.930 CC test/env/vtophys/vtophys.o 00:29:45.188 CXX test/cpp_headers/barrier.o 00:29:45.188 LINK iscsi_tgt 00:29:45.188 CC examples/nvme/hotplug/hotplug.o 00:29:45.188 CC examples/idxd/perf/perf.o 00:29:45.188 CXX test/cpp_headers/base64.o 00:29:45.188 LINK spdk_lspci 00:29:45.188 LINK vtophys 00:29:45.447 LINK spdk_tgt 00:29:45.447 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:29:45.447 CXX test/cpp_headers/bdev.o 00:29:45.447 LINK hotplug 00:29:45.447 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:29:45.447 CC app/spdk_nvme_perf/perf.o 00:29:45.447 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:29:45.706 CXX test/cpp_headers/bdev_module.o 00:29:45.706 LINK idxd_perf 00:29:45.706 CC app/spdk_nvme_identify/identify.o 00:29:45.706 CC app/spdk_nvme_discover/discovery_aer.o 00:29:45.706 LINK env_dpdk_post_init 00:29:45.706 CC examples/nvme/cmb_copy/cmb_copy.o 00:29:45.706 CXX test/cpp_headers/bdev_zone.o 00:29:45.965 LINK spdk_nvme_discover 00:29:45.965 CC test/env/memory/memory_ut.o 00:29:45.965 LINK cmb_copy 00:29:45.965 CC app/spdk_top/spdk_top.o 00:29:45.965 LINK vhost_fuzz 00:29:45.965 CXX test/cpp_headers/bit_array.o 00:29:46.223 CC test/env/pci/pci_ut.o 00:29:46.223 CC examples/nvme/abort/abort.o 00:29:46.223 CC app/vhost/vhost.o 00:29:46.223 CXX test/cpp_headers/bit_pool.o 00:29:46.481 LINK iscsi_fuzz 00:29:46.481 LINK spdk_nvme_identify 00:29:46.481 LINK spdk_nvme_perf 00:29:46.481 CXX test/cpp_headers/blob_bdev.o 00:29:46.481 LINK vhost 00:29:46.481 LINK memory_ut 00:29:46.481 LINK pci_ut 00:29:46.739 CXX test/cpp_headers/blobfs_bdev.o 00:29:46.739 LINK abort 00:29:46.739 CC test/rpc_client/rpc_client_test.o 00:29:46.998 CC test/event/event_perf/event_perf.o 00:29:46.998 CXX test/cpp_headers/blobfs.o 00:29:46.998 CC test/nvme/aer/aer.o 00:29:46.998 CC test/thread/poller_perf/poller_perf.o 00:29:46.998 CC test/lvol/esnap/esnap.o 00:29:46.998 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:29:46.998 LINK spdk_top 00:29:46.998 CC test/nvme/reset/reset.o 00:29:46.998 LINK event_perf 00:29:46.998 LINK rpc_client_test 00:29:46.998 CXX test/cpp_headers/blob.o 00:29:46.998 LINK poller_perf 00:29:47.256 LINK pmr_persistence 00:29:47.256 LINK aer 00:29:47.256 CXX test/cpp_headers/conf.o 00:29:47.256 CC test/event/reactor/reactor.o 00:29:47.256 CC app/spdk_dd/spdk_dd.o 00:29:47.256 CC test/nvme/sgl/sgl.o 00:29:47.256 LINK reset 00:29:47.256 CC examples/interrupt_tgt/interrupt_tgt.o 00:29:47.513 CXX test/cpp_headers/config.o 00:29:47.513 LINK reactor 00:29:47.513 CC test/nvme/e2edp/nvme_dp.o 00:29:47.513 CXX test/cpp_headers/cpuset.o 00:29:47.513 CC test/nvme/overhead/overhead.o 00:29:47.513 CC app/fio/nvme/fio_plugin.o 00:29:47.513 LINK sgl 00:29:47.513 LINK interrupt_tgt 00:29:47.513 CXX test/cpp_headers/crc16.o 00:29:47.513 LINK spdk_dd 00:29:47.513 CC test/event/reactor_perf/reactor_perf.o 00:29:47.771 LINK nvme_dp 00:29:47.771 CXX test/cpp_headers/crc32.o 00:29:47.771 CC test/nvme/err_injection/err_injection.o 00:29:47.771 LINK reactor_perf 00:29:47.771 CC test/nvme/startup/startup.o 00:29:47.771 LINK overhead 00:29:47.771 CXX test/cpp_headers/crc64.o 00:29:48.029 CC test/nvme/reserve/reserve.o 00:29:48.029 CXX test/cpp_headers/dif.o 00:29:48.029 LINK err_injection 00:29:48.029 CC test/event/app_repeat/app_repeat.o 00:29:48.029 LINK startup 00:29:48.287 LINK spdk_nvme 00:29:48.287 CC test/nvme/simple_copy/simple_copy.o 00:29:48.287 CC 
app/fio/bdev/fio_plugin.o 00:29:48.287 LINK app_repeat 00:29:48.287 CXX test/cpp_headers/dma.o 00:29:48.287 LINK reserve 00:29:48.287 CC test/nvme/connect_stress/connect_stress.o 00:29:48.287 CC test/nvme/boot_partition/boot_partition.o 00:29:48.545 CC test/nvme/compliance/nvme_compliance.o 00:29:48.545 LINK simple_copy 00:29:48.545 CXX test/cpp_headers/endian.o 00:29:48.545 CXX test/cpp_headers/env_dpdk.o 00:29:48.545 LINK connect_stress 00:29:48.545 LINK boot_partition 00:29:48.545 CC test/event/scheduler/scheduler.o 00:29:48.804 CC test/nvme/fused_ordering/fused_ordering.o 00:29:48.804 CXX test/cpp_headers/env.o 00:29:48.804 CC test/nvme/doorbell_aers/doorbell_aers.o 00:29:48.804 LINK spdk_bdev 00:29:48.804 LINK nvme_compliance 00:29:48.804 CC test/nvme/fdp/fdp.o 00:29:48.804 CC test/nvme/cuse/cuse.o 00:29:48.804 CXX test/cpp_headers/event.o 00:29:48.804 LINK scheduler 00:29:48.804 CXX test/cpp_headers/fd_group.o 00:29:49.062 LINK doorbell_aers 00:29:49.062 LINK fused_ordering 00:29:49.062 CXX test/cpp_headers/fd.o 00:29:49.062 CXX test/cpp_headers/file.o 00:29:49.062 CXX test/cpp_headers/ftl.o 00:29:49.062 CXX test/cpp_headers/gpt_spec.o 00:29:49.062 CXX test/cpp_headers/hexlify.o 00:29:49.320 LINK fdp 00:29:49.320 CXX test/cpp_headers/histogram_data.o 00:29:49.320 CXX test/cpp_headers/idxd.o 00:29:49.320 CXX test/cpp_headers/idxd_spec.o 00:29:49.320 CXX test/cpp_headers/init.o 00:29:49.578 CXX test/cpp_headers/ioat.o 00:29:49.578 CXX test/cpp_headers/ioat_spec.o 00:29:49.578 CXX test/cpp_headers/iscsi_spec.o 00:29:49.578 CXX test/cpp_headers/json.o 00:29:49.578 CXX test/cpp_headers/jsonrpc.o 00:29:49.578 CXX test/cpp_headers/likely.o 00:29:49.578 CXX test/cpp_headers/log.o 00:29:49.836 CXX test/cpp_headers/lvol.o 00:29:49.836 CXX test/cpp_headers/memory.o 00:29:49.836 CXX test/cpp_headers/mmio.o 00:29:49.836 CXX test/cpp_headers/nbd.o 00:29:49.836 CXX test/cpp_headers/notify.o 00:29:49.836 CXX test/cpp_headers/nvme.o 00:29:49.836 CXX test/cpp_headers/nvme_intel.o 00:29:49.836 CXX test/cpp_headers/nvme_ocssd.o 00:29:50.095 CXX test/cpp_headers/nvme_ocssd_spec.o 00:29:50.095 CXX test/cpp_headers/nvme_spec.o 00:29:50.095 CXX test/cpp_headers/nvme_zns.o 00:29:50.095 CXX test/cpp_headers/nvmf_cmd.o 00:29:50.095 CXX test/cpp_headers/nvmf_fc_spec.o 00:29:50.095 CXX test/cpp_headers/nvmf.o 00:29:50.095 CXX test/cpp_headers/nvmf_spec.o 00:29:50.354 CXX test/cpp_headers/nvmf_transport.o 00:29:50.354 LINK cuse 00:29:50.354 CXX test/cpp_headers/opal.o 00:29:50.354 CXX test/cpp_headers/opal_spec.o 00:29:50.354 CXX test/cpp_headers/pci_ids.o 00:29:50.354 CXX test/cpp_headers/pipe.o 00:29:50.354 CXX test/cpp_headers/queue.o 00:29:50.354 CXX test/cpp_headers/reduce.o 00:29:50.354 CXX test/cpp_headers/rpc.o 00:29:50.354 CXX test/cpp_headers/scheduler.o 00:29:50.354 CXX test/cpp_headers/scsi.o 00:29:50.612 CXX test/cpp_headers/scsi_spec.o 00:29:50.612 CXX test/cpp_headers/sock.o 00:29:50.612 CXX test/cpp_headers/stdinc.o 00:29:50.612 CXX test/cpp_headers/string.o 00:29:50.612 CXX test/cpp_headers/thread.o 00:29:50.612 CXX test/cpp_headers/trace.o 00:29:50.612 CXX test/cpp_headers/trace_parser.o 00:29:50.871 CXX test/cpp_headers/tree.o 00:29:50.871 CXX test/cpp_headers/ublk.o 00:29:50.871 CXX test/cpp_headers/util.o 00:29:50.871 CXX test/cpp_headers/uuid.o 00:29:50.871 CXX test/cpp_headers/version.o 00:29:50.871 CXX test/cpp_headers/vfio_user_pci.o 00:29:50.871 CXX test/cpp_headers/vfio_user_spec.o 00:29:50.871 CXX test/cpp_headers/vhost.o 00:29:50.871 CXX test/cpp_headers/vmd.o 00:29:50.871 CXX 
test/cpp_headers/xor.o 00:29:50.871 CXX test/cpp_headers/zipf.o 00:29:52.250 LINK esnap 00:29:52.508 00:29:52.508 real 0m53.905s 00:29:52.508 user 5m14.108s 00:29:52.508 sys 1m3.611s 00:29:52.508 12:52:11 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:29:52.508 ************************************ 00:29:52.508 END TEST make 00:29:52.508 ************************************ 00:29:52.508 12:52:11 -- common/autotest_common.sh@10 -- $ set +x 00:29:52.767 12:52:12 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:29:52.767 12:52:12 -- nvmf/common.sh@7 -- # uname -s 00:29:52.767 12:52:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:52.767 12:52:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:52.767 12:52:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:52.767 12:52:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:52.767 12:52:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:52.767 12:52:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:52.767 12:52:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:52.767 12:52:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:52.767 12:52:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:52.767 12:52:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:52.767 12:52:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:864ae4a3-cd96-439b-8ef2-6dab4d992115 00:29:52.767 12:52:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=864ae4a3-cd96-439b-8ef2-6dab4d992115 00:29:52.767 12:52:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:52.767 12:52:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:52.767 12:52:12 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:29:52.767 12:52:12 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:52.767 12:52:12 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:52.767 12:52:12 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:52.767 12:52:12 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:52.767 12:52:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:52.767 12:52:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:52.767 12:52:12 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:52.767 12:52:12 -- paths/export.sh@5 -- # export PATH 00:29:52.767 12:52:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:52.767 12:52:12 -- nvmf/common.sh@46 -- # : 0 
00:29:52.767 12:52:12 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:29:52.767 12:52:12 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:29:52.767 12:52:12 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:29:52.767 12:52:12 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:52.767 12:52:12 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:52.767 12:52:12 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:29:52.767 12:52:12 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:29:52.767 12:52:12 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:29:52.767 12:52:12 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:29:52.767 12:52:12 -- spdk/autotest.sh@32 -- # uname -s 00:29:52.768 12:52:12 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:29:52.768 12:52:12 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:29:52.768 12:52:12 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:29:52.768 12:52:12 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:29:52.768 12:52:12 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:29:52.768 12:52:12 -- spdk/autotest.sh@44 -- # modprobe nbd 00:29:52.768 12:52:12 -- spdk/autotest.sh@46 -- # type -P udevadm 00:29:52.768 12:52:12 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:29:52.768 12:52:12 -- spdk/autotest.sh@48 -- # udevadm_pid=61499 00:29:52.768 12:52:12 -- spdk/autotest.sh@51 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/power 00:29:52.768 12:52:12 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:29:52.768 12:52:12 -- spdk/autotest.sh@54 -- # echo 61529 00:29:52.768 12:52:12 -- spdk/autotest.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power 00:29:52.768 12:52:12 -- spdk/autotest.sh@56 -- # echo 61530 00:29:52.768 12:52:12 -- spdk/autotest.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power 00:29:52.768 12:52:12 -- spdk/autotest.sh@58 -- # [[ QEMU != QEMU ]] 00:29:52.768 12:52:12 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:29:52.768 12:52:12 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:29:52.768 12:52:12 -- common/autotest_common.sh@712 -- # xtrace_disable 00:29:52.768 12:52:12 -- common/autotest_common.sh@10 -- # set +x 00:29:52.768 12:52:12 -- spdk/autotest.sh@70 -- # create_test_list 00:29:52.768 12:52:12 -- common/autotest_common.sh@736 -- # xtrace_disable 00:29:52.768 12:52:12 -- common/autotest_common.sh@10 -- # set +x 00:29:52.768 12:52:12 -- spdk/autotest.sh@72 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:29:52.768 12:52:12 -- spdk/autotest.sh@72 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:29:52.768 12:52:12 -- spdk/autotest.sh@72 -- # src=/home/vagrant/spdk_repo/spdk 00:29:52.768 12:52:12 -- spdk/autotest.sh@73 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:29:52.768 12:52:12 -- spdk/autotest.sh@74 -- # cd /home/vagrant/spdk_repo/spdk 00:29:52.768 12:52:12 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:29:52.768 12:52:12 -- common/autotest_common.sh@1440 -- # uname 00:29:52.768 12:52:12 -- common/autotest_common.sh@1440 -- # '[' Linux = FreeBSD ']' 00:29:52.768 12:52:12 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:29:52.768 12:52:12 -- common/autotest_common.sh@1460 -- # uname 00:29:52.768 
12:52:12 -- common/autotest_common.sh@1460 -- # [[ Linux = FreeBSD ]] 00:29:52.768 12:52:12 -- spdk/autotest.sh@82 -- # grep CC_TYPE mk/cc.mk 00:29:52.768 12:52:12 -- spdk/autotest.sh@82 -- # CC_TYPE=CC_TYPE=gcc 00:29:52.768 12:52:12 -- spdk/autotest.sh@83 -- # hash lcov 00:29:52.768 12:52:12 -- spdk/autotest.sh@83 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:29:52.768 12:52:12 -- spdk/autotest.sh@91 -- # export 'LCOV_OPTS= 00:29:52.768 --rc lcov_branch_coverage=1 00:29:52.768 --rc lcov_function_coverage=1 00:29:52.768 --rc genhtml_branch_coverage=1 00:29:52.768 --rc genhtml_function_coverage=1 00:29:52.768 --rc genhtml_legend=1 00:29:52.768 --rc geninfo_all_blocks=1 00:29:52.768 ' 00:29:52.768 12:52:12 -- spdk/autotest.sh@91 -- # LCOV_OPTS=' 00:29:52.768 --rc lcov_branch_coverage=1 00:29:52.768 --rc lcov_function_coverage=1 00:29:52.768 --rc genhtml_branch_coverage=1 00:29:52.768 --rc genhtml_function_coverage=1 00:29:52.768 --rc genhtml_legend=1 00:29:52.768 --rc geninfo_all_blocks=1 00:29:52.768 ' 00:29:52.768 12:52:12 -- spdk/autotest.sh@92 -- # export 'LCOV=lcov 00:29:52.768 --rc lcov_branch_coverage=1 00:29:52.768 --rc lcov_function_coverage=1 00:29:52.768 --rc genhtml_branch_coverage=1 00:29:52.768 --rc genhtml_function_coverage=1 00:29:52.768 --rc genhtml_legend=1 00:29:52.768 --rc geninfo_all_blocks=1 00:29:52.768 --no-external' 00:29:52.768 12:52:12 -- spdk/autotest.sh@92 -- # LCOV='lcov 00:29:52.768 --rc lcov_branch_coverage=1 00:29:52.768 --rc lcov_function_coverage=1 00:29:52.768 --rc genhtml_branch_coverage=1 00:29:52.768 --rc genhtml_function_coverage=1 00:29:52.768 --rc genhtml_legend=1 00:29:52.768 --rc geninfo_all_blocks=1 00:29:52.768 --no-external' 00:29:52.768 12:52:12 -- spdk/autotest.sh@94 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:29:53.026 lcov: LCOV version 1.14 00:29:53.026 12:52:12 -- spdk/autotest.sh@96 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:30:01.139 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:30:01.139 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:30:01.139 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:30:01.139 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:30:01.139 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:30:01.139 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:30:19.230 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:30:19.230 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:30:19.230 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:30:19.231 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:30:19.231 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:30:19.231 
geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:30:19.231 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:30:19.231 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:30:19.231 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:30:19.231 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:30:19.231 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:30:19.231 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:30:19.231 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:30:19.231 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:30:19.231 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:30:19.231 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:30:19.231 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:30:19.231 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:30:19.231 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:30:19.231 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:30:19.231 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:30:19.231 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:30:19.231 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:30:19.231 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:30:19.231 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:30:19.231 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:30:19.231 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:30:19.231 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:30:19.231 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:30:19.231 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:30:19.231 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:30:19.231 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:30:19.231 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:30:19.231 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:30:19.231 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:30:19.231 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:30:19.231 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:30:19.231 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 
00:30:19.231 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:30:19.231 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:30:19.231 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:30:19.231 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:30:19.231 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:30:19.231 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:30:19.231 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:30:19.231 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:30:19.231 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:30:19.231 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:30:19.231 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:30:19.231 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:30:19.231 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:30:19.231 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:30:19.231 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:30:19.231 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:30:19.231 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:30:19.231 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:30:19.231 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:30:19.231 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:30:19.231 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:30:19.231 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:30:19.231 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:30:19.231 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:30:19.231 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:30:19.231 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:30:19.231 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:30:19.231 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:30:19.231 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:30:19.231 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:30:19.231 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:30:19.231 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:30:19.231 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:30:19.231 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:30:19.231 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:30:19.231 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:30:19.231 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:30:19.231 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:30:19.231 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:30:19.231 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:30:19.231 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:30:19.231 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:30:19.231 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:30:19.231 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:30:19.231 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:30:19.231 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:30:19.231 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:30:19.231 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:30:19.231 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:30:19.231 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:30:19.231 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:30:19.231 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:30:19.231 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:30:19.231 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:30:19.231 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:30:19.231 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:30:19.231 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:30:19.231 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:30:19.231 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:30:19.231 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:30:19.231 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:30:19.231 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:30:19.231 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:30:19.231 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:30:19.231 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:30:19.231 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:30:19.231 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:30:19.231 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:30:19.231 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:30:19.231 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:30:19.232 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:30:19.232 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:30:19.232 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:30:19.232 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:30:19.232 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:30:19.232 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:30:19.232 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:30:19.232 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:30:19.232 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:30:19.232 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:30:19.232 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:30:19.232 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:30:19.232 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:30:19.232 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:30:19.232 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:30:19.232 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:30:19.232 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:30:19.232 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:30:19.232 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:30:19.232 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:30:19.232 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:30:19.232 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:30:19.232 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:30:19.232 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:30:19.232 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:30:19.232 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:30:19.232 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:30:19.232 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:30:19.232 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:30:19.232 geninfo: 
WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:30:19.232 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:30:19.232 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:30:19.232 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:30:19.232 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:30:19.232 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:30:19.232 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:30:19.232 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:30:19.232 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:30:19.232 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:30:19.232 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:30:19.232 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:30:19.232 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:30:19.232 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:30:19.232 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:30:19.232 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:30:19.232 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:30:19.232 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:30:19.232 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:30:19.232 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:30:19.232 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:30:19.232 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:30:19.232 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:30:19.232 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:30:19.232 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:30:19.232 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:30:19.232 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:30:19.232 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:30:19.232 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:30:19.232 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:30:19.232 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:30:19.232 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:30:19.232 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:30:19.232 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:30:19.232 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:30:21.768 12:52:41 -- spdk/autotest.sh@100 -- # timing_enter pre_cleanup 00:30:21.768 12:52:41 -- common/autotest_common.sh@712 -- # xtrace_disable 00:30:21.768 12:52:41 -- common/autotest_common.sh@10 -- # set +x 00:30:21.768 12:52:41 -- spdk/autotest.sh@102 -- # rm -f 00:30:21.768 12:52:41 -- spdk/autotest.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:30:22.336 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:22.595 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:30:22.595 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:30:22.595 12:52:41 -- spdk/autotest.sh@107 -- # get_zoned_devs 00:30:22.596 12:52:41 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:30:22.596 12:52:41 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:30:22.596 12:52:41 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:30:22.596 12:52:41 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:30:22.596 12:52:41 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:30:22.596 12:52:41 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:30:22.596 12:52:41 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:30:22.596 12:52:41 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:30:22.596 12:52:41 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:30:22.596 12:52:41 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n1 00:30:22.596 12:52:41 -- common/autotest_common.sh@1647 -- # local device=nvme1n1 00:30:22.596 12:52:41 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:30:22.596 12:52:41 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:30:22.596 12:52:41 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:30:22.596 12:52:41 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n2 00:30:22.596 12:52:41 -- common/autotest_common.sh@1647 -- # local device=nvme1n2 00:30:22.596 12:52:41 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:30:22.596 12:52:41 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:30:22.596 12:52:41 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:30:22.596 12:52:41 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n3 00:30:22.596 12:52:41 -- common/autotest_common.sh@1647 -- # local device=nvme1n3 00:30:22.596 12:52:41 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:30:22.596 12:52:41 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:30:22.596 12:52:41 -- spdk/autotest.sh@109 -- # (( 0 > 0 )) 00:30:22.596 12:52:41 -- spdk/autotest.sh@121 -- # ls /dev/nvme0n1 /dev/nvme1n1 /dev/nvme1n2 /dev/nvme1n3 00:30:22.596 12:52:41 -- spdk/autotest.sh@121 -- # grep -v p 00:30:22.596 12:52:41 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:30:22.596 12:52:41 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:30:22.596 12:52:41 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme0n1 00:30:22.596 12:52:41 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:30:22.596 12:52:41 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 
00:30:22.596 No valid GPT data, bailing 00:30:22.596 12:52:41 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:30:22.596 12:52:41 -- scripts/common.sh@393 -- # pt= 00:30:22.596 12:52:41 -- scripts/common.sh@394 -- # return 1 00:30:22.596 12:52:41 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:30:22.596 1+0 records in 00:30:22.596 1+0 records out 00:30:22.596 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00538497 s, 195 MB/s 00:30:22.596 12:52:41 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:30:22.596 12:52:41 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:30:22.596 12:52:41 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme1n1 00:30:22.596 12:52:41 -- scripts/common.sh@380 -- # local block=/dev/nvme1n1 pt 00:30:22.596 12:52:41 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:30:22.596 No valid GPT data, bailing 00:30:22.596 12:52:42 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:30:22.855 12:52:42 -- scripts/common.sh@393 -- # pt= 00:30:22.855 12:52:42 -- scripts/common.sh@394 -- # return 1 00:30:22.855 12:52:42 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:30:22.855 1+0 records in 00:30:22.855 1+0 records out 00:30:22.855 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00488944 s, 214 MB/s 00:30:22.855 12:52:42 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:30:22.855 12:52:42 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:30:22.855 12:52:42 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme1n2 00:30:22.855 12:52:42 -- scripts/common.sh@380 -- # local block=/dev/nvme1n2 pt 00:30:22.855 12:52:42 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:30:22.855 No valid GPT data, bailing 00:30:22.855 12:52:42 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:30:22.855 12:52:42 -- scripts/common.sh@393 -- # pt= 00:30:22.855 12:52:42 -- scripts/common.sh@394 -- # return 1 00:30:22.855 12:52:42 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:30:22.855 1+0 records in 00:30:22.855 1+0 records out 00:30:22.855 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00466944 s, 225 MB/s 00:30:22.855 12:52:42 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:30:22.855 12:52:42 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:30:22.855 12:52:42 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme1n3 00:30:22.855 12:52:42 -- scripts/common.sh@380 -- # local block=/dev/nvme1n3 pt 00:30:22.855 12:52:42 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:30:22.855 No valid GPT data, bailing 00:30:22.855 12:52:42 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:30:22.855 12:52:42 -- scripts/common.sh@393 -- # pt= 00:30:22.855 12:52:42 -- scripts/common.sh@394 -- # return 1 00:30:22.855 12:52:42 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:30:22.855 1+0 records in 00:30:22.855 1+0 records out 00:30:22.855 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00504078 s, 208 MB/s 00:30:22.855 12:52:42 -- spdk/autotest.sh@129 -- # sync 00:30:23.114 12:52:42 -- spdk/autotest.sh@131 -- # xtrace_disable_per_cmd reap_spdk_processes 00:30:23.114 12:52:42 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:30:23.114 12:52:42 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:30:25.022 
12:52:44 -- spdk/autotest.sh@135 -- # uname -s 00:30:25.022 12:52:44 -- spdk/autotest.sh@135 -- # '[' Linux = Linux ']' 00:30:25.022 12:52:44 -- spdk/autotest.sh@136 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:30:25.022 12:52:44 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:30:25.022 12:52:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:25.022 12:52:44 -- common/autotest_common.sh@10 -- # set +x 00:30:25.022 ************************************ 00:30:25.022 START TEST setup.sh 00:30:25.022 ************************************ 00:30:25.022 12:52:44 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:30:25.022 * Looking for test storage... 00:30:25.022 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:30:25.022 12:52:44 -- setup/test-setup.sh@10 -- # uname -s 00:30:25.022 12:52:44 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:30:25.022 12:52:44 -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:30:25.022 12:52:44 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:30:25.022 12:52:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:25.022 12:52:44 -- common/autotest_common.sh@10 -- # set +x 00:30:25.281 ************************************ 00:30:25.281 START TEST acl 00:30:25.281 ************************************ 00:30:25.281 12:52:44 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:30:25.281 * Looking for test storage... 00:30:25.281 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:30:25.281 12:52:44 -- setup/acl.sh@10 -- # get_zoned_devs 00:30:25.281 12:52:44 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:30:25.281 12:52:44 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:30:25.281 12:52:44 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:30:25.281 12:52:44 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:30:25.281 12:52:44 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:30:25.281 12:52:44 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:30:25.281 12:52:44 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:30:25.281 12:52:44 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:30:25.281 12:52:44 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:30:25.281 12:52:44 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n1 00:30:25.281 12:52:44 -- common/autotest_common.sh@1647 -- # local device=nvme1n1 00:30:25.281 12:52:44 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:30:25.281 12:52:44 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:30:25.281 12:52:44 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:30:25.281 12:52:44 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n2 00:30:25.281 12:52:44 -- common/autotest_common.sh@1647 -- # local device=nvme1n2 00:30:25.281 12:52:44 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:30:25.281 12:52:44 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:30:25.281 12:52:44 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:30:25.281 12:52:44 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n3 00:30:25.281 12:52:44 -- common/autotest_common.sh@1647 -- # local device=nvme1n3 00:30:25.281 12:52:44 -- 
common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:30:25.281 12:52:44 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:30:25.281 12:52:44 -- setup/acl.sh@12 -- # devs=() 00:30:25.281 12:52:44 -- setup/acl.sh@12 -- # declare -a devs 00:30:25.281 12:52:44 -- setup/acl.sh@13 -- # drivers=() 00:30:25.281 12:52:44 -- setup/acl.sh@13 -- # declare -A drivers 00:30:25.281 12:52:44 -- setup/acl.sh@51 -- # setup reset 00:30:25.281 12:52:44 -- setup/common.sh@9 -- # [[ reset == output ]] 00:30:25.281 12:52:44 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:30:25.850 12:52:45 -- setup/acl.sh@52 -- # collect_setup_devs 00:30:25.850 12:52:45 -- setup/acl.sh@16 -- # local dev driver 00:30:25.850 12:52:45 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:30:25.850 12:52:45 -- setup/acl.sh@15 -- # setup output status 00:30:25.850 12:52:45 -- setup/common.sh@9 -- # [[ output == output ]] 00:30:25.850 12:52:45 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:30:26.108 Hugepages 00:30:26.108 node hugesize free / total 00:30:26.108 12:52:45 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:30:26.108 12:52:45 -- setup/acl.sh@19 -- # continue 00:30:26.108 12:52:45 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:30:26.108 00:30:26.108 Type BDF Vendor Device NUMA Driver Device Block devices 00:30:26.108 12:52:45 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:30:26.108 12:52:45 -- setup/acl.sh@19 -- # continue 00:30:26.108 12:52:45 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:30:26.108 12:52:45 -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:30:26.108 12:52:45 -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:30:26.108 12:52:45 -- setup/acl.sh@20 -- # continue 00:30:26.108 12:52:45 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:30:26.365 12:52:45 -- setup/acl.sh@19 -- # [[ 0000:00:06.0 == *:*:*.* ]] 00:30:26.365 12:52:45 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:30:26.365 12:52:45 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:30:26.365 12:52:45 -- setup/acl.sh@22 -- # devs+=("$dev") 00:30:26.365 12:52:45 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:30:26.365 12:52:45 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:30:26.365 12:52:45 -- setup/acl.sh@19 -- # [[ 0000:00:07.0 == *:*:*.* ]] 00:30:26.365 12:52:45 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:30:26.365 12:52:45 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:30:26.365 12:52:45 -- setup/acl.sh@22 -- # devs+=("$dev") 00:30:26.365 12:52:45 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:30:26.365 12:52:45 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:30:26.365 12:52:45 -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:30:26.365 12:52:45 -- setup/acl.sh@54 -- # run_test denied denied 00:30:26.365 12:52:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:30:26.365 12:52:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:26.365 12:52:45 -- common/autotest_common.sh@10 -- # set +x 00:30:26.365 ************************************ 00:30:26.365 START TEST denied 00:30:26.365 ************************************ 00:30:26.365 12:52:45 -- common/autotest_common.sh@1104 -- # denied 00:30:26.365 12:52:45 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:06.0' 00:30:26.365 12:52:45 -- setup/acl.sh@38 -- # setup output config 00:30:26.365 12:52:45 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:06.0' 00:30:26.365 12:52:45 -- 
setup/common.sh@9 -- # [[ output == output ]] 00:30:26.365 12:52:45 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:30:27.299 0000:00:06.0 (1b36 0010): Skipping denied controller at 0000:00:06.0 00:30:27.299 12:52:46 -- setup/acl.sh@40 -- # verify 0000:00:06.0 00:30:27.299 12:52:46 -- setup/acl.sh@28 -- # local dev driver 00:30:27.299 12:52:46 -- setup/acl.sh@30 -- # for dev in "$@" 00:30:27.299 12:52:46 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:06.0 ]] 00:30:27.299 12:52:46 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:06.0/driver 00:30:27.299 12:52:46 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:30:27.299 12:52:46 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:30:27.299 12:52:46 -- setup/acl.sh@41 -- # setup reset 00:30:27.299 12:52:46 -- setup/common.sh@9 -- # [[ reset == output ]] 00:30:27.299 12:52:46 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:30:27.867 00:30:27.867 real 0m1.461s 00:30:27.867 user 0m0.589s 00:30:27.867 sys 0m0.829s 00:30:27.867 12:52:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:27.867 ************************************ 00:30:27.867 END TEST denied 00:30:27.867 12:52:47 -- common/autotest_common.sh@10 -- # set +x 00:30:27.867 ************************************ 00:30:27.867 12:52:47 -- setup/acl.sh@55 -- # run_test allowed allowed 00:30:27.867 12:52:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:30:27.867 12:52:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:27.867 12:52:47 -- common/autotest_common.sh@10 -- # set +x 00:30:27.867 ************************************ 00:30:27.867 START TEST allowed 00:30:27.867 ************************************ 00:30:27.867 12:52:47 -- common/autotest_common.sh@1104 -- # allowed 00:30:27.867 12:52:47 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:06.0 00:30:27.867 12:52:47 -- setup/acl.sh@45 -- # setup output config 00:30:27.867 12:52:47 -- setup/acl.sh@46 -- # grep -E '0000:00:06.0 .*: nvme -> .*' 00:30:27.867 12:52:47 -- setup/common.sh@9 -- # [[ output == output ]] 00:30:27.867 12:52:47 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:30:28.803 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:30:28.803 12:52:47 -- setup/acl.sh@47 -- # verify 0000:00:07.0 00:30:28.803 12:52:47 -- setup/acl.sh@28 -- # local dev driver 00:30:28.803 12:52:47 -- setup/acl.sh@30 -- # for dev in "$@" 00:30:28.803 12:52:47 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:07.0 ]] 00:30:28.803 12:52:47 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:07.0/driver 00:30:28.803 12:52:47 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:30:28.803 12:52:47 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:30:28.803 12:52:47 -- setup/acl.sh@48 -- # setup reset 00:30:28.803 12:52:47 -- setup/common.sh@9 -- # [[ reset == output ]] 00:30:28.803 12:52:47 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:30:29.370 00:30:29.370 real 0m1.524s 00:30:29.370 user 0m0.659s 00:30:29.370 sys 0m0.862s 00:30:29.370 12:52:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:29.370 12:52:48 -- common/autotest_common.sh@10 -- # set +x 00:30:29.370 ************************************ 00:30:29.370 END TEST allowed 00:30:29.370 ************************************ 00:30:29.370 00:30:29.370 real 0m4.233s 00:30:29.370 user 0m1.805s 00:30:29.370 sys 0m2.404s 00:30:29.370 12:52:48 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:30:29.370 12:52:48 -- common/autotest_common.sh@10 -- # set +x 00:30:29.370 ************************************ 00:30:29.370 END TEST acl 00:30:29.370 ************************************ 00:30:29.370 12:52:48 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:30:29.370 12:52:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:30:29.370 12:52:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:29.370 12:52:48 -- common/autotest_common.sh@10 -- # set +x 00:30:29.370 ************************************ 00:30:29.370 START TEST hugepages 00:30:29.370 ************************************ 00:30:29.370 12:52:48 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:30:29.632 * Looking for test storage... 00:30:29.632 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:30:29.632 12:52:48 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:30:29.632 12:52:48 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:30:29.632 12:52:48 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:30:29.632 12:52:48 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:30:29.632 12:52:48 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:30:29.632 12:52:48 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:30:29.632 12:52:48 -- setup/common.sh@17 -- # local get=Hugepagesize 00:30:29.632 12:52:48 -- setup/common.sh@18 -- # local node= 00:30:29.632 12:52:48 -- setup/common.sh@19 -- # local var val 00:30:29.632 12:52:48 -- setup/common.sh@20 -- # local mem_f mem 00:30:29.632 12:52:48 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:30:29.632 12:52:48 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:30:29.632 12:52:48 -- setup/common.sh@25 -- # [[ -n '' ]] 00:30:29.632 12:52:48 -- setup/common.sh@28 -- # mapfile -t mem 00:30:29.632 12:52:48 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:30:29.632 12:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:30:29.632 12:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:30:29.632 12:52:48 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 4759684 kB' 'MemAvailable: 7371988 kB' 'Buffers: 2436 kB' 'Cached: 2814748 kB' 'SwapCached: 0 kB' 'Active: 476144 kB' 'Inactive: 2444452 kB' 'Active(anon): 113904 kB' 'Inactive(anon): 0 kB' 'Active(file): 362240 kB' 'Inactive(file): 2444452 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 104784 kB' 'Mapped: 48636 kB' 'Shmem: 10492 kB' 'KReclaimable: 85104 kB' 'Slab: 164988 kB' 'SReclaimable: 85104 kB' 'SUnreclaim: 79884 kB' 'KernelStack: 6668 kB' 'PageTables: 4464 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412440 kB' 'Committed_AS: 335680 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54948 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:30:29.632 12:52:48 -- setup/common.sh@32 -- # [[ MemTotal == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:30:29.632 12:52:48 -- setup/common.sh@32 -- # continue 00:30:29.632 12:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:30:29.632 12:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:30:29.632 12:52:48 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:30:29.632 12:52:48 -- setup/common.sh@32 -- # continue 00:30:29.632 12:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:30:29.632 12:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:30:29.632 12:52:48 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:30:29.632 12:52:48 -- setup/common.sh@32 -- # continue 00:30:29.632 12:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:30:29.632 12:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:30:29.632 12:52:48 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:30:29.632 12:52:48 -- setup/common.sh@32 -- # continue 00:30:29.632 12:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:30:29.632 12:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:30:29.632 12:52:48 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:30:29.632 12:52:48 -- setup/common.sh@32 -- # continue 00:30:29.632 12:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:30:29.632 12:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:30:29.632 12:52:48 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:30:29.632 12:52:48 -- setup/common.sh@32 -- # continue 00:30:29.632 12:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:30:29.632 12:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:30:29.632 12:52:48 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:30:29.632 12:52:48 -- setup/common.sh@32 -- # continue 00:30:29.632 12:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:30:29.632 12:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:30:29.632 12:52:48 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:30:29.632 12:52:48 -- setup/common.sh@32 -- # continue 00:30:29.632 12:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:30:29.632 12:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:30:29.632 12:52:48 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:30:29.632 12:52:48 -- setup/common.sh@32 -- # continue 00:30:29.632 12:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:30:29.632 12:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:30:29.632 12:52:48 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:30:29.632 12:52:48 -- setup/common.sh@32 -- # continue 00:30:29.632 12:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:30:29.632 12:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:30:29.632 12:52:48 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:30:29.632 12:52:48 -- setup/common.sh@32 -- # continue 00:30:29.632 12:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:30:29.632 12:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:30:29.632 12:52:48 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:30:29.632 12:52:48 -- setup/common.sh@32 -- # continue 00:30:29.632 12:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:30:29.632 12:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:30:29.632 12:52:48 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:30:29.632 12:52:48 -- setup/common.sh@32 -- # continue 00:30:29.632 12:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:30:29.632 12:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:30:29.632 
12:52:48 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:30:29.632 12:52:48 -- setup/common.sh@32 -- # continue 00:30:29.632 12:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:30:29.632 12:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:30:29.632 12:52:48 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:30:29.632 12:52:48 -- setup/common.sh@32 -- # continue 00:30:29.632 12:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:30:29.632 12:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:30:29.632 12:52:48 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:30:29.632 12:52:48 -- setup/common.sh@32 -- # continue 00:30:29.632 12:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:30:29.632 12:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:30:29.632 12:52:48 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:30:29.632 12:52:48 -- setup/common.sh@32 -- # continue 00:30:29.632 12:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:30:29.632 12:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:30:29.632 12:52:48 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:30:29.632 12:52:48 -- setup/common.sh@32 -- # continue 00:30:29.632 12:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:30:29.632 12:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:30:29.632 12:52:48 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:30:29.632 12:52:48 -- setup/common.sh@32 -- # continue 00:30:29.632 12:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:30:29.632 12:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:30:29.632 12:52:48 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:30:29.632 12:52:48 -- setup/common.sh@32 -- # continue 00:30:29.632 12:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:30:29.632 12:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:30:29.632 12:52:48 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:30:29.632 12:52:48 -- setup/common.sh@32 -- # continue 00:30:29.632 12:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:30:29.632 12:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:30:29.632 12:52:48 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:30:29.632 12:52:48 -- setup/common.sh@32 -- # continue 00:30:29.632 12:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:30:29.632 12:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:30:29.632 12:52:48 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:30:29.632 12:52:48 -- setup/common.sh@32 -- # continue 00:30:29.632 12:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:30:29.633 12:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:30:29.633 12:52:48 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:30:29.633 12:52:48 -- setup/common.sh@32 -- # continue 00:30:29.633 12:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:30:29.633 12:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:30:29.633 12:52:48 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:30:29.633 12:52:48 -- setup/common.sh@32 -- # continue 00:30:29.633 12:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:30:29.633 12:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:30:29.633 12:52:48 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:30:29.633 12:52:48 -- setup/common.sh@32 -- # continue 00:30:29.633 12:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:30:29.633 12:52:48 -- setup/common.sh@31 -- # read -r var val 
_ 00:30:29.633 12:52:48 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:30:29.633 12:52:48 -- setup/common.sh@32 -- # continue 00:30:29.633 12:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:30:29.633 12:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:30:29.633 12:52:48 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:30:29.633 12:52:48 -- setup/common.sh@32 -- # continue 00:30:29.633 12:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:30:29.633 12:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:30:29.633 12:52:48 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:30:29.633 12:52:48 -- setup/common.sh@32 -- # continue 00:30:29.633 12:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:30:29.633 12:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:30:29.633 12:52:48 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:30:29.633 12:52:48 -- setup/common.sh@32 -- # continue 00:30:29.633 12:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:30:29.633 12:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:30:29.633 12:52:48 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:30:29.633 12:52:48 -- setup/common.sh@32 -- # continue 00:30:29.633 12:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:30:29.633 12:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:30:29.633 12:52:48 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:30:29.633 12:52:48 -- setup/common.sh@32 -- # continue 00:30:29.633 12:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:30:29.633 12:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:30:29.633 12:52:48 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:30:29.633 12:52:48 -- setup/common.sh@32 -- # continue 00:30:29.633 12:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:30:29.633 12:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:30:29.633 12:52:48 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:30:29.633 12:52:48 -- setup/common.sh@32 -- # continue 00:30:29.633 12:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:30:29.633 12:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:30:29.633 12:52:48 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:30:29.633 12:52:48 -- setup/common.sh@32 -- # continue 00:30:29.633 12:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:30:29.633 12:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:30:29.633 12:52:48 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:30:29.633 12:52:48 -- setup/common.sh@32 -- # continue 00:30:29.633 12:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:30:29.633 12:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:30:29.633 12:52:48 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:30:29.633 12:52:48 -- setup/common.sh@32 -- # continue 00:30:29.633 12:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:30:29.633 12:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:30:29.633 12:52:48 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:30:29.633 12:52:48 -- setup/common.sh@32 -- # continue 00:30:29.633 12:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:30:29.633 12:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:30:29.633 12:52:48 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:30:29.633 12:52:48 -- setup/common.sh@32 -- # continue 00:30:29.633 12:52:48 -- setup/common.sh@31 -- # IFS=': ' 
00:30:29.633 12:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:30:29.633 12:52:48 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:30:29.633 12:52:48 -- setup/common.sh@32 -- # continue 00:30:29.633 12:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:30:29.633 12:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:30:29.633 12:52:48 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:30:29.633 12:52:48 -- setup/common.sh@32 -- # continue 00:30:29.633 12:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:30:29.633 12:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:30:29.633 12:52:48 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:30:29.633 12:52:48 -- setup/common.sh@32 -- # continue 00:30:29.633 12:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:30:29.633 12:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:30:29.633 12:52:48 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:30:29.633 12:52:48 -- setup/common.sh@32 -- # continue 00:30:29.633 12:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:30:29.633 12:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:30:29.633 12:52:48 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:30:29.633 12:52:48 -- setup/common.sh@32 -- # continue 00:30:29.633 12:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:30:29.633 12:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:30:29.633 12:52:48 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:30:29.633 12:52:48 -- setup/common.sh@32 -- # continue 00:30:29.633 12:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:30:29.633 12:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:30:29.633 12:52:48 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:30:29.633 12:52:48 -- setup/common.sh@32 -- # continue 00:30:29.633 12:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:30:29.633 12:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:30:29.633 12:52:48 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:30:29.633 12:52:48 -- setup/common.sh@32 -- # continue 00:30:29.633 12:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:30:29.633 12:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:30:29.633 12:52:48 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:30:29.633 12:52:48 -- setup/common.sh@32 -- # continue 00:30:29.633 12:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:30:29.633 12:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:30:29.633 12:52:48 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:30:29.633 12:52:48 -- setup/common.sh@32 -- # continue 00:30:29.633 12:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:30:29.633 12:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:30:29.633 12:52:48 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:30:29.633 12:52:48 -- setup/common.sh@32 -- # continue 00:30:29.633 12:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:30:29.633 12:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:30:29.633 12:52:48 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:30:29.633 12:52:48 -- setup/common.sh@32 -- # continue 00:30:29.633 12:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:30:29.633 12:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:30:29.633 12:52:48 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:30:29.633 12:52:48 -- 
setup/common.sh@32 -- # continue 00:30:29.633 12:52:48 -- setup/common.sh@31 -- # IFS=': ' 00:30:29.633 12:52:48 -- setup/common.sh@31 -- # read -r var val _ 00:30:29.633 12:52:48 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:30:29.633 12:52:48 -- setup/common.sh@33 -- # echo 2048 00:30:29.633 12:52:48 -- setup/common.sh@33 -- # return 0 00:30:29.633 12:52:48 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:30:29.633 12:52:48 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:30:29.633 12:52:48 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:30:29.633 12:52:48 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:30:29.633 12:52:48 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:30:29.633 12:52:48 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:30:29.633 12:52:48 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:30:29.633 12:52:48 -- setup/hugepages.sh@207 -- # get_nodes 00:30:29.633 12:52:48 -- setup/hugepages.sh@27 -- # local node 00:30:29.633 12:52:48 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:30:29.633 12:52:48 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:30:29.633 12:52:48 -- setup/hugepages.sh@32 -- # no_nodes=1 00:30:29.633 12:52:48 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:30:29.633 12:52:48 -- setup/hugepages.sh@208 -- # clear_hp 00:30:29.633 12:52:48 -- setup/hugepages.sh@37 -- # local node hp 00:30:29.633 12:52:48 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:30:29.634 12:52:48 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:30:29.634 12:52:48 -- setup/hugepages.sh@41 -- # echo 0 00:30:29.634 12:52:48 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:30:29.634 12:52:48 -- setup/hugepages.sh@41 -- # echo 0 00:30:29.634 12:52:48 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:30:29.634 12:52:48 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:30:29.634 12:52:48 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:30:29.634 12:52:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:30:29.634 12:52:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:29.634 12:52:48 -- common/autotest_common.sh@10 -- # set +x 00:30:29.634 ************************************ 00:30:29.634 START TEST default_setup 00:30:29.634 ************************************ 00:30:29.634 12:52:48 -- common/autotest_common.sh@1104 -- # default_setup 00:30:29.634 12:52:48 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:30:29.634 12:52:48 -- setup/hugepages.sh@49 -- # local size=2097152 00:30:29.634 12:52:48 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:30:29.634 12:52:48 -- setup/hugepages.sh@51 -- # shift 00:30:29.634 12:52:48 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:30:29.634 12:52:48 -- setup/hugepages.sh@52 -- # local node_ids 00:30:29.634 12:52:48 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:30:29.634 12:52:48 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:30:29.634 12:52:48 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:30:29.634 12:52:48 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:30:29.634 12:52:48 -- setup/hugepages.sh@62 -- # local user_nodes 00:30:29.634 12:52:48 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:30:29.634 12:52:48 -- setup/hugepages.sh@65 -- # local 
_no_nodes=1 00:30:29.634 12:52:48 -- setup/hugepages.sh@67 -- # nodes_test=() 00:30:29.634 12:52:48 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:30:29.634 12:52:48 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:30:29.634 12:52:48 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:30:29.634 12:52:48 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:30:29.634 12:52:48 -- setup/hugepages.sh@73 -- # return 0 00:30:29.634 12:52:48 -- setup/hugepages.sh@137 -- # setup output 00:30:29.634 12:52:48 -- setup/common.sh@9 -- # [[ output == output ]] 00:30:29.634 12:52:48 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:30:30.198 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:30.198 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:30:30.458 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:30:30.458 12:52:49 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:30:30.458 12:52:49 -- setup/hugepages.sh@89 -- # local node 00:30:30.458 12:52:49 -- setup/hugepages.sh@90 -- # local sorted_t 00:30:30.458 12:52:49 -- setup/hugepages.sh@91 -- # local sorted_s 00:30:30.458 12:52:49 -- setup/hugepages.sh@92 -- # local surp 00:30:30.458 12:52:49 -- setup/hugepages.sh@93 -- # local resv 00:30:30.458 12:52:49 -- setup/hugepages.sh@94 -- # local anon 00:30:30.458 12:52:49 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:30:30.458 12:52:49 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:30:30.459 12:52:49 -- setup/common.sh@17 -- # local get=AnonHugePages 00:30:30.459 12:52:49 -- setup/common.sh@18 -- # local node= 00:30:30.459 12:52:49 -- setup/common.sh@19 -- # local var val 00:30:30.459 12:52:49 -- setup/common.sh@20 -- # local mem_f mem 00:30:30.459 12:52:49 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:30:30.459 12:52:49 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:30:30.459 12:52:49 -- setup/common.sh@25 -- # [[ -n '' ]] 00:30:30.459 12:52:49 -- setup/common.sh@28 -- # mapfile -t mem 00:30:30.459 12:52:49 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:30:30.459 12:52:49 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6850032 kB' 'MemAvailable: 9462188 kB' 'Buffers: 2436 kB' 'Cached: 2814744 kB' 'SwapCached: 0 kB' 'Active: 491940 kB' 'Inactive: 2444472 kB' 'Active(anon): 129700 kB' 'Inactive(anon): 0 kB' 'Active(file): 362240 kB' 'Inactive(file): 2444472 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 120848 kB' 'Mapped: 48880 kB' 'Shmem: 10468 kB' 'KReclaimable: 84764 kB' 'Slab: 164560 kB' 'SReclaimable: 84764 kB' 'SUnreclaim: 79796 kB' 'KernelStack: 6576 kB' 'PageTables: 4320 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 351704 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54964 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:30:30.459 12:52:49 -- setup/common.sh@31 
-- # IFS=': ' 00:30:30.459 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.459 12:52:49 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:30.459 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.459 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.459 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.459 12:52:49 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:30.459 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.459 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.459 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.459 12:52:49 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:30.459 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.459 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.459 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.459 12:52:49 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:30.459 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.459 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.459 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.459 12:52:49 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:30.459 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.459 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.459 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.459 12:52:49 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:30.459 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.459 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.459 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.459 12:52:49 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:30.459 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.459 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.459 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.459 12:52:49 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:30.459 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.459 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.459 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.459 12:52:49 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:30.459 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.459 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.459 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.459 12:52:49 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:30.459 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.459 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.459 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.459 12:52:49 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:30.459 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.459 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.459 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.459 12:52:49 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:30.459 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.459 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.459 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.459 12:52:49 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:30.459 12:52:49 -- 
setup/common.sh@32 -- # continue 00:30:30.459 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.459 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.459 12:52:49 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:30.459 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.459 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.459 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.459 12:52:49 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:30.459 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.459 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.459 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.459 12:52:49 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:30.459 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.459 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.459 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.459 12:52:49 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:30.459 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.459 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.459 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.459 12:52:49 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:30.459 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.459 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.459 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.459 12:52:49 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:30.459 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.459 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.459 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.459 12:52:49 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:30.459 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.459 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.459 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.459 12:52:49 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:30.459 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.459 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.459 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.459 12:52:49 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:30.459 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.459 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.459 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.459 12:52:49 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:30.459 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.459 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.459 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.459 12:52:49 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:30.459 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.459 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.459 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.459 12:52:49 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:30.459 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.459 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.459 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.459 12:52:49 -- setup/common.sh@32 -- # [[ SReclaimable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:30.459 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.459 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.459 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.459 12:52:49 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:30.459 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.460 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.460 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.460 12:52:49 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:30.460 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.460 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.460 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.460 12:52:49 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:30.460 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.460 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.460 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.460 12:52:49 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:30.460 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.460 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.460 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.460 12:52:49 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:30.460 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.460 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.460 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.460 12:52:49 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:30.460 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.460 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.460 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.460 12:52:49 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:30.460 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.460 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.460 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.460 12:52:49 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:30.460 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.460 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.460 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.460 12:52:49 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:30.460 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.460 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.460 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.460 12:52:49 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:30.460 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.460 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.460 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.460 12:52:49 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:30.460 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.460 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.460 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.460 12:52:49 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:30.460 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.460 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.460 12:52:49 -- setup/common.sh@31 -- # 
read -r var val _ 00:30:30.460 12:52:49 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:30.460 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.460 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.460 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.460 12:52:49 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:30.460 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.460 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.460 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.460 12:52:49 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:30.460 12:52:49 -- setup/common.sh@33 -- # echo 0 00:30:30.460 12:52:49 -- setup/common.sh@33 -- # return 0 00:30:30.460 12:52:49 -- setup/hugepages.sh@97 -- # anon=0 00:30:30.460 12:52:49 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:30:30.460 12:52:49 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:30:30.460 12:52:49 -- setup/common.sh@18 -- # local node= 00:30:30.460 12:52:49 -- setup/common.sh@19 -- # local var val 00:30:30.460 12:52:49 -- setup/common.sh@20 -- # local mem_f mem 00:30:30.460 12:52:49 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:30:30.460 12:52:49 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:30:30.460 12:52:49 -- setup/common.sh@25 -- # [[ -n '' ]] 00:30:30.460 12:52:49 -- setup/common.sh@28 -- # mapfile -t mem 00:30:30.460 12:52:49 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:30:30.460 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.460 12:52:49 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6850032 kB' 'MemAvailable: 9462188 kB' 'Buffers: 2436 kB' 'Cached: 2814744 kB' 'SwapCached: 0 kB' 'Active: 491692 kB' 'Inactive: 2444472 kB' 'Active(anon): 129452 kB' 'Inactive(anon): 0 kB' 'Active(file): 362240 kB' 'Inactive(file): 2444472 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 120604 kB' 'Mapped: 48692 kB' 'Shmem: 10468 kB' 'KReclaimable: 84764 kB' 'Slab: 164564 kB' 'SReclaimable: 84764 kB' 'SUnreclaim: 79800 kB' 'KernelStack: 6592 kB' 'PageTables: 4364 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 351704 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54948 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:30:30.460 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.460 12:52:49 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:30.460 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.460 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.460 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.460 12:52:49 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:30.460 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.460 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.460 12:52:49 -- setup/common.sh@31 -- # read -r 
var val _ 00:30:30.460 12:52:49 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:30.460 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.460 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.460 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.460 12:52:49 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:30.460 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.460 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.460 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.460 12:52:49 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:30.460 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.460 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.460 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.460 12:52:49 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:30.460 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.460 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.461 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.461 12:52:49 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:30.461 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.461 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.461 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.461 12:52:49 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:30.461 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.461 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.461 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.461 12:52:49 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:30.461 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.461 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.461 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.461 12:52:49 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:30.461 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.461 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.461 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.461 12:52:49 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:30.461 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.461 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.461 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.461 12:52:49 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:30.461 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.461 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.461 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.461 12:52:49 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:30.461 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.461 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.461 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.461 12:52:49 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:30.461 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.461 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.461 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.461 12:52:49 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:30.461 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.461 
12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.461 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.461 12:52:49 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:30.461 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.461 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.461 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.461 12:52:49 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:30.461 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.461 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.461 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.461 12:52:49 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:30.461 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.461 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.461 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.461 12:52:49 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:30.461 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.461 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.461 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.461 12:52:49 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:30.461 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.461 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.461 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.461 12:52:49 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:30.461 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.461 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.461 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.461 12:52:49 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:30.461 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.461 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.461 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.461 12:52:49 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:30.461 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.461 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.461 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.461 12:52:49 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:30.461 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.461 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.461 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.461 12:52:49 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:30.461 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.461 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.461 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.461 12:52:49 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:30.461 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.461 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.461 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.461 12:52:49 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:30.461 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.461 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.461 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.461 12:52:49 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:30:30.461 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.461 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.461 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.461 12:52:49 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:30.461 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.461 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.461 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.461 12:52:49 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:30.461 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.461 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.461 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.461 12:52:49 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:30.461 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.461 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.461 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.461 12:52:49 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:30.461 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.461 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.461 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.461 12:52:49 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:30.461 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.461 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.461 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.462 12:52:49 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:30.462 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.462 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.462 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.462 12:52:49 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:30.462 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.462 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.462 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.462 12:52:49 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:30.462 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.462 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.462 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.462 12:52:49 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:30.462 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.462 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.462 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.462 12:52:49 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:30.462 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.462 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.462 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.462 12:52:49 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:30.462 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.462 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.462 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.462 12:52:49 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:30.462 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.462 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.462 12:52:49 -- setup/common.sh@31 -- # 
read -r var val _ 00:30:30.462 12:52:49 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:30.462 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.462 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.462 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.462 12:52:49 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:30.462 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.462 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.462 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.462 12:52:49 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:30.462 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.462 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.462 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.462 12:52:49 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:30.462 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.462 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.462 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.462 12:52:49 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:30.462 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.462 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.462 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.462 12:52:49 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:30.462 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.462 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.462 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.462 12:52:49 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:30.462 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.462 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.462 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.462 12:52:49 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:30.462 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.462 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.462 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.462 12:52:49 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:30.462 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.462 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.462 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.462 12:52:49 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:30.462 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.462 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.462 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.462 12:52:49 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:30.462 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.462 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.462 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.462 12:52:49 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:30.462 12:52:49 -- setup/common.sh@33 -- # echo 0 00:30:30.462 12:52:49 -- setup/common.sh@33 -- # return 0 00:30:30.462 12:52:49 -- setup/hugepages.sh@99 -- # surp=0 00:30:30.462 12:52:49 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:30:30.462 12:52:49 -- setup/common.sh@17 -- # local 
get=HugePages_Rsvd 00:30:30.462 12:52:49 -- setup/common.sh@18 -- # local node= 00:30:30.462 12:52:49 -- setup/common.sh@19 -- # local var val 00:30:30.462 12:52:49 -- setup/common.sh@20 -- # local mem_f mem 00:30:30.462 12:52:49 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:30:30.462 12:52:49 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:30:30.462 12:52:49 -- setup/common.sh@25 -- # [[ -n '' ]] 00:30:30.462 12:52:49 -- setup/common.sh@28 -- # mapfile -t mem 00:30:30.462 12:52:49 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:30:30.462 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.462 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.462 12:52:49 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6850032 kB' 'MemAvailable: 9462188 kB' 'Buffers: 2436 kB' 'Cached: 2814744 kB' 'SwapCached: 0 kB' 'Active: 491716 kB' 'Inactive: 2444472 kB' 'Active(anon): 129476 kB' 'Inactive(anon): 0 kB' 'Active(file): 362240 kB' 'Inactive(file): 2444472 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 120612 kB' 'Mapped: 48692 kB' 'Shmem: 10468 kB' 'KReclaimable: 84764 kB' 'Slab: 164560 kB' 'SReclaimable: 84764 kB' 'SUnreclaim: 79796 kB' 'KernelStack: 6592 kB' 'PageTables: 4364 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 351704 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54932 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:30:30.462 12:52:49 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:30.462 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.462 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.462 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.463 12:52:49 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:30.463 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.463 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.463 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.463 12:52:49 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:30.463 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.463 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.463 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.463 12:52:49 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:30.463 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.463 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.463 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.463 12:52:49 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:30.463 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.463 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.463 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.463 12:52:49 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:30.463 12:52:49 -- 
setup/common.sh@32 -- # continue 00:30:30.463 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.463 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.463 12:52:49 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:30.463 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.463 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.463 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.463 12:52:49 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:30.463 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.463 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.463 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.463 12:52:49 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:30.463 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.463 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.463 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.463 12:52:49 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:30.463 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.463 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.463 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.463 12:52:49 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:30.463 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.463 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.463 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.463 12:52:49 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:30.463 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.463 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.463 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.463 12:52:49 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:30.463 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.463 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.463 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.463 12:52:49 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:30.463 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.463 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.463 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.463 12:52:49 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:30.463 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.463 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.463 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.463 12:52:49 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:30.463 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.463 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.463 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.463 12:52:49 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:30.463 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.463 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.463 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.463 12:52:49 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:30.463 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.463 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.463 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.463 12:52:49 -- 
setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:30.463 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.463 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.463 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.463 12:52:49 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:30.463 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.463 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.463 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.463 12:52:49 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:30.463 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.463 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.463 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.463 12:52:49 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:30.463 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.463 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.463 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.463 12:52:49 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:30.463 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.463 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.463 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.463 12:52:49 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:30.463 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.463 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.463 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.463 12:52:49 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:30.463 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.463 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.463 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.463 12:52:49 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:30.463 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.463 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.463 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.463 12:52:49 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:30.463 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.463 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.463 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.463 12:52:49 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:30.463 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.463 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.463 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.464 12:52:49 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:30.464 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.464 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.464 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.464 12:52:49 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:30.464 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.464 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.464 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.464 12:52:49 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:30.464 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.464 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 
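
The scans above are successive passes of the get_meminfo helper from setup/common.sh: each pass loads /proc/meminfo (the per-node file /sys/devices/system/node/node/meminfo is checked first but does not exist when no node index is given), strips any "Node N " prefix, and walks the key/value pairs until the requested key (AnonHugePages, HugePages_Surp, HugePages_Rsvd) matches, echoing its value. A condensed shell sketch of that logic, reconstructed from the trace rather than copied verbatim from the SPDK source:

    shopt -s extglob    # needed for the +([0-9]) pattern below

    get_meminfo() {     # usage: get_meminfo <key> [numa-node], e.g. get_meminfo HugePages_Rsvd 0
        local get=$1 node=${2:-} var val _
        local mem_f=/proc/meminfo
        # prefer the per-node meminfo file when a node index was given and it exists
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")      # strip the "Node N " prefix on node files
        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue  # not the requested key, keep scanning
            echo "$val"                       # e.g. "0" for HugePages_Rsvd in this run
            return 0
        done
        return 1
    }

    get_meminfo HugePages_Rsvd      # -> 0 in the run above
    get_meminfo HugePages_Surp 0    # node0's value, read from its own meminfo file
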
00:30:30.464 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.464 12:52:49 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:30.464 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.464 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.464 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.464 12:52:49 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:30.464 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.464 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.464 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.464 12:52:49 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:30.464 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.464 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.464 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.464 12:52:49 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:30.464 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.464 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.464 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.464 12:52:49 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:30.464 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.464 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.464 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.464 12:52:49 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:30.464 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.464 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.464 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.464 12:52:49 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:30.464 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.464 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.464 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.464 12:52:49 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:30.464 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.464 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.464 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.464 12:52:49 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:30.464 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.464 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.464 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.464 12:52:49 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:30.464 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.464 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.464 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.464 12:52:49 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:30.464 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.464 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.464 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.464 12:52:49 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:30.464 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.464 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.464 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.464 12:52:49 -- setup/common.sh@32 -- # [[ FileHugePages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:30.464 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.464 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.464 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.464 12:52:49 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:30.464 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.464 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.464 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.464 12:52:49 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:30.464 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.464 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.464 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.464 12:52:49 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:30.464 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.464 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.464 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.464 12:52:49 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:30.464 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.464 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.464 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.464 12:52:49 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:30.464 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.464 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.464 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.464 12:52:49 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:30.464 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.464 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.464 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.464 12:52:49 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:30.464 12:52:49 -- setup/common.sh@33 -- # echo 0 00:30:30.464 12:52:49 -- setup/common.sh@33 -- # return 0 00:30:30.464 12:52:49 -- setup/hugepages.sh@100 -- # resv=0 00:30:30.464 12:52:49 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:30:30.464 nr_hugepages=1024 00:30:30.464 resv_hugepages=0 00:30:30.464 surplus_hugepages=0 00:30:30.464 anon_hugepages=0 00:30:30.464 12:52:49 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:30:30.464 12:52:49 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:30:30.464 12:52:49 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:30:30.464 12:52:49 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:30:30.464 12:52:49 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:30:30.464 12:52:49 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:30:30.464 12:52:49 -- setup/common.sh@17 -- # local get=HugePages_Total 00:30:30.464 12:52:49 -- setup/common.sh@18 -- # local node= 00:30:30.464 12:52:49 -- setup/common.sh@19 -- # local var val 00:30:30.464 12:52:49 -- setup/common.sh@20 -- # local mem_f mem 00:30:30.464 12:52:49 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:30:30.464 12:52:49 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:30:30.465 12:52:49 -- setup/common.sh@25 -- # [[ -n '' ]] 00:30:30.465 12:52:49 -- setup/common.sh@28 -- # mapfile -t mem 00:30:30.465 12:52:49 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:30:30.465 12:52:49 -- setup/common.sh@31 -- # 
IFS=': ' 00:30:30.465 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.465 12:52:49 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6849780 kB' 'MemAvailable: 9461936 kB' 'Buffers: 2436 kB' 'Cached: 2814744 kB' 'SwapCached: 0 kB' 'Active: 491968 kB' 'Inactive: 2444472 kB' 'Active(anon): 129728 kB' 'Inactive(anon): 0 kB' 'Active(file): 362240 kB' 'Inactive(file): 2444472 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 120888 kB' 'Mapped: 48692 kB' 'Shmem: 10468 kB' 'KReclaimable: 84764 kB' 'Slab: 164560 kB' 'SReclaimable: 84764 kB' 'SUnreclaim: 79796 kB' 'KernelStack: 6608 kB' 'PageTables: 4412 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 351704 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54948 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:30:30.465 12:52:49 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:30.465 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.465 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.465 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.465 12:52:49 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:30.465 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.465 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.465 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.465 12:52:49 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:30.465 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.465 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.465 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.465 12:52:49 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:30.465 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.465 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.465 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.465 12:52:49 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:30.465 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.465 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.465 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.465 12:52:49 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:30.465 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.465 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.465 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.465 12:52:49 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:30.465 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.465 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.465 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.465 12:52:49 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:30.465 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.465 12:52:49 -- 
setup/common.sh@31 -- # IFS=': ' 00:30:30.465 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.465 12:52:49 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:30.465 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.465 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.465 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.465 12:52:49 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:30.465 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.465 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.465 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.465 12:52:49 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:30.465 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.465 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.465 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.465 12:52:49 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:30.465 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.465 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.465 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.465 12:52:49 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:30.465 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.465 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.465 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.465 12:52:49 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:30.465 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.465 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.465 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.465 12:52:49 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:30.465 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.465 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.465 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.465 12:52:49 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:30.465 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.465 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.466 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.466 12:52:49 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:30.466 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.466 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.466 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.466 12:52:49 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:30.466 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.466 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.466 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.466 12:52:49 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:30.466 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.466 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.466 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.466 12:52:49 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:30.466 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.466 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.466 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.466 12:52:49 -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:30.466 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.466 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.466 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.466 12:52:49 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:30.466 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.466 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.466 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.466 12:52:49 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:30.466 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.466 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.466 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.466 12:52:49 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:30.466 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.466 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.466 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.466 12:52:49 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:30.466 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.466 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.466 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.466 12:52:49 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:30.466 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.466 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.466 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.466 12:52:49 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:30.466 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.466 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.466 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.466 12:52:49 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:30.466 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.466 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.466 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.466 12:52:49 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:30.466 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.466 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.466 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.466 12:52:49 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:30.466 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.466 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.466 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.466 12:52:49 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:30.466 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.466 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.466 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.466 12:52:49 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:30.466 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.466 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.466 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.466 12:52:49 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:30.466 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.466 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.466 
12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.466 12:52:49 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:30.466 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.466 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.466 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.466 12:52:49 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:30.466 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.466 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.466 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.466 12:52:49 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:30.466 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.466 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.466 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.466 12:52:49 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:30.466 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.466 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.466 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.466 12:52:49 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:30.466 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.466 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.466 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.466 12:52:49 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:30.466 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.466 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.466 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.466 12:52:49 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:30.466 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.466 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.466 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.466 12:52:49 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:30.466 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.466 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.466 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.466 12:52:49 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:30.466 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.466 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.466 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.466 12:52:49 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:30.466 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.466 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.466 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.466 12:52:49 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:30.466 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.466 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.466 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.466 12:52:49 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:30.466 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.466 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.466 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.466 12:52:49 -- setup/common.sh@32 -- # [[ CmaTotal == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:30.466 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.466 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.466 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.466 12:52:49 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:30.466 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.466 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.466 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.466 12:52:49 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:30.466 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.466 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.466 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.466 12:52:49 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:30.466 12:52:49 -- setup/common.sh@33 -- # echo 1024 00:30:30.466 12:52:49 -- setup/common.sh@33 -- # return 0 00:30:30.466 12:52:49 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:30:30.466 12:52:49 -- setup/hugepages.sh@112 -- # get_nodes 00:30:30.466 12:52:49 -- setup/hugepages.sh@27 -- # local node 00:30:30.466 12:52:49 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:30:30.466 12:52:49 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:30:30.466 12:52:49 -- setup/hugepages.sh@32 -- # no_nodes=1 00:30:30.466 12:52:49 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:30:30.466 12:52:49 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:30:30.466 12:52:49 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:30:30.466 12:52:49 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:30:30.466 12:52:49 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:30:30.466 12:52:49 -- setup/common.sh@18 -- # local node=0 00:30:30.466 12:52:49 -- setup/common.sh@19 -- # local var val 00:30:30.466 12:52:49 -- setup/common.sh@20 -- # local mem_f mem 00:30:30.466 12:52:49 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:30:30.466 12:52:49 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:30:30.466 12:52:49 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:30:30.466 12:52:49 -- setup/common.sh@28 -- # mapfile -t mem 00:30:30.466 12:52:49 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:30:30.466 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.466 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.467 12:52:49 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6849780 kB' 'MemUsed: 5392200 kB' 'SwapCached: 0 kB' 'Active: 491632 kB' 'Inactive: 2444472 kB' 'Active(anon): 129392 kB' 'Inactive(anon): 0 kB' 'Active(file): 362240 kB' 'Inactive(file): 2444472 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'FilePages: 2817180 kB' 'Mapped: 48692 kB' 'AnonPages: 120592 kB' 'Shmem: 10468 kB' 'KernelStack: 6576 kB' 'PageTables: 4308 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 84764 kB' 'Slab: 164540 kB' 'SReclaimable: 84764 kB' 'SUnreclaim: 79776 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:30:30.467 12:52:49 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:30:30.467 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.467 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.467 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.467 12:52:49 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:30.467 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.467 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.467 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.467 12:52:49 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:30.467 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.467 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.467 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.467 12:52:49 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:30.467 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.467 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.467 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.467 12:52:49 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:30.467 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.467 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.467 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.467 12:52:49 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:30.467 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.467 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.467 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.467 12:52:49 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:30.726 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.726 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.726 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.726 12:52:49 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:30.726 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.726 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.726 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.726 12:52:49 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:30.726 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.726 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.726 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.726 12:52:49 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:30.726 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.726 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.726 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.726 12:52:49 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:30.726 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.726 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.726 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.726 12:52:49 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:30.726 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.726 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.726 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.726 12:52:49 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:30.726 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.726 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.726 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 
00:30:30.726 12:52:49 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:30.726 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.726 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.726 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.726 12:52:49 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:30.726 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.726 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.726 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.726 12:52:49 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:30.726 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.726 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.726 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.726 12:52:49 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:30.726 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.726 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.726 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.726 12:52:49 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:30.726 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.726 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.726 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.726 12:52:49 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:30.726 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.726 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.726 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.726 12:52:49 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:30.726 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.726 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.726 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.726 12:52:49 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:30.726 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.726 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.726 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.726 12:52:49 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:30.726 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.726 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.726 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.726 12:52:49 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:30.726 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.726 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.726 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.726 12:52:49 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:30.726 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.726 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.726 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.726 12:52:49 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:30.726 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.726 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.726 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.726 12:52:49 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:30.726 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.726 12:52:49 -- 
setup/common.sh@31 -- # IFS=': ' 00:30:30.726 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.726 12:52:49 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:30.726 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.726 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.726 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.726 12:52:49 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:30.726 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.726 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.726 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.726 12:52:49 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:30.726 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.726 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.726 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.726 12:52:49 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:30.726 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.726 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.726 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.726 12:52:49 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:30.726 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.726 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.726 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.726 12:52:49 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:30.726 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.726 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.726 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.726 12:52:49 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:30.726 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.726 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.726 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.726 12:52:49 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:30.726 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.726 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.726 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.726 12:52:49 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:30.726 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.726 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.726 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.726 12:52:49 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:30.726 12:52:49 -- setup/common.sh@32 -- # continue 00:30:30.726 12:52:49 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.726 12:52:49 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.726 12:52:49 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:30.727 12:52:49 -- setup/common.sh@33 -- # echo 0 00:30:30.727 12:52:49 -- setup/common.sh@33 -- # return 0 00:30:30.727 12:52:49 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:30:30.727 12:52:49 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:30:30.727 12:52:49 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:30:30.727 node0=1024 expecting 1024 00:30:30.727 12:52:49 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:30:30.727 12:52:49 -- 
setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:30:30.727 12:52:49 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:30:30.727 00:30:30.727 real 0m1.023s 00:30:30.727 user 0m0.489s 00:30:30.727 sys 0m0.472s 00:30:30.727 12:52:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:30.727 12:52:49 -- common/autotest_common.sh@10 -- # set +x 00:30:30.727 ************************************ 00:30:30.727 END TEST default_setup 00:30:30.727 ************************************ 00:30:30.727 12:52:49 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:30:30.727 12:52:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:30:30.727 12:52:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:30.727 12:52:49 -- common/autotest_common.sh@10 -- # set +x 00:30:30.727 ************************************ 00:30:30.727 START TEST per_node_1G_alloc 00:30:30.727 ************************************ 00:30:30.727 12:52:49 -- common/autotest_common.sh@1104 -- # per_node_1G_alloc 00:30:30.727 12:52:49 -- setup/hugepages.sh@143 -- # local IFS=, 00:30:30.727 12:52:49 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:30:30.727 12:52:49 -- setup/hugepages.sh@49 -- # local size=1048576 00:30:30.727 12:52:49 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:30:30.727 12:52:49 -- setup/hugepages.sh@51 -- # shift 00:30:30.727 12:52:49 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:30:30.727 12:52:49 -- setup/hugepages.sh@52 -- # local node_ids 00:30:30.727 12:52:49 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:30:30.727 12:52:49 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:30:30.727 12:52:49 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:30:30.727 12:52:49 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:30:30.727 12:52:49 -- setup/hugepages.sh@62 -- # local user_nodes 00:30:30.727 12:52:49 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:30:30.727 12:52:49 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:30:30.727 12:52:49 -- setup/hugepages.sh@67 -- # nodes_test=() 00:30:30.727 12:52:49 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:30:30.727 12:52:49 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:30:30.727 12:52:49 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:30:30.727 12:52:49 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:30:30.727 12:52:49 -- setup/hugepages.sh@73 -- # return 0 00:30:30.727 12:52:49 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:30:30.727 12:52:49 -- setup/hugepages.sh@146 -- # HUGENODE=0 00:30:30.727 12:52:49 -- setup/hugepages.sh@146 -- # setup output 00:30:30.727 12:52:49 -- setup/common.sh@9 -- # [[ output == output ]] 00:30:30.727 12:52:49 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:30:30.987 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:30.987 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:30:30.987 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:30:30.987 12:52:50 -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:30:30.987 12:52:50 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:30:30.987 12:52:50 -- setup/hugepages.sh@89 -- # local node 00:30:30.987 12:52:50 -- setup/hugepages.sh@90 -- # local sorted_t 00:30:30.987 12:52:50 -- setup/hugepages.sh@91 -- # local sorted_s 00:30:30.987 12:52:50 -- setup/hugepages.sh@92 -- # local surp 00:30:30.987 12:52:50 -- 
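The per_node_1G_alloc run that starts above passes 1048576 to get_test_nr_hugepages for node 0; with the 2048 kB hugepage size reported in the surrounding meminfo dumps, that works out to the nr_hugepages=512 (1 GiB on the node) that the trace then uses as NRHUGE. A minimal sketch of that sizing arithmetic, assuming the size argument is expressed in kB and using an illustrative function name rather than the actual setup/hugepages.sh helper:

    #!/usr/bin/env bash
    # Sketch of the per-node hugepage sizing visible in the trace above.
    # Values mirror the log (1 GiB request, 2048 kB hugepages); the function
    # name is illustrative, not the real setup/hugepages.sh helper.
    nr_hugepages_for_size() {
        local size_kb=$1       # requested allocation in kB, e.g. 1048576
        local hugepage_kb=$2   # hugepage size in kB, e.g. 2048
        echo $(( size_kb / hugepage_kb ))
    }

    nr_hugepages_for_size 1048576 2048   # prints 512, matching NRHUGE=512 above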
setup/hugepages.sh@93 -- # local resv 00:30:30.987 12:52:50 -- setup/hugepages.sh@94 -- # local anon 00:30:30.987 12:52:50 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:30:30.987 12:52:50 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:30:30.987 12:52:50 -- setup/common.sh@17 -- # local get=AnonHugePages 00:30:30.987 12:52:50 -- setup/common.sh@18 -- # local node= 00:30:30.987 12:52:50 -- setup/common.sh@19 -- # local var val 00:30:30.987 12:52:50 -- setup/common.sh@20 -- # local mem_f mem 00:30:30.987 12:52:50 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:30:30.987 12:52:50 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:30:30.987 12:52:50 -- setup/common.sh@25 -- # [[ -n '' ]] 00:30:30.987 12:52:50 -- setup/common.sh@28 -- # mapfile -t mem 00:30:30.987 12:52:50 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:30:30.987 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.987 12:52:50 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7897152 kB' 'MemAvailable: 10509308 kB' 'Buffers: 2436 kB' 'Cached: 2814744 kB' 'SwapCached: 0 kB' 'Active: 492160 kB' 'Inactive: 2444472 kB' 'Active(anon): 129920 kB' 'Inactive(anon): 0 kB' 'Active(file): 362240 kB' 'Inactive(file): 2444472 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 121108 kB' 'Mapped: 49016 kB' 'Shmem: 10468 kB' 'KReclaimable: 84764 kB' 'Slab: 164568 kB' 'SReclaimable: 84764 kB' 'SUnreclaim: 79804 kB' 'KernelStack: 6580 kB' 'PageTables: 4392 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 351704 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54948 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:30:30.987 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.987 12:52:50 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:30.987 12:52:50 -- setup/common.sh@32 -- # continue 00:30:30.987 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.987 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.987 12:52:50 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:30.987 12:52:50 -- setup/common.sh@32 -- # continue 00:30:30.987 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.987 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.987 12:52:50 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:30.987 12:52:50 -- setup/common.sh@32 -- # continue 00:30:30.987 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.987 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.987 12:52:50 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:30.987 12:52:50 -- setup/common.sh@32 -- # continue 00:30:30.987 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.987 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.987 12:52:50 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:30:30.987 12:52:50 -- setup/common.sh@32 -- # continue 00:30:30.987 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.987 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.987 12:52:50 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:30.987 12:52:50 -- setup/common.sh@32 -- # continue 00:30:30.987 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.987 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.987 12:52:50 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:30.987 12:52:50 -- setup/common.sh@32 -- # continue 00:30:30.987 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.987 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.987 12:52:50 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:30.987 12:52:50 -- setup/common.sh@32 -- # continue 00:30:30.987 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.987 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.987 12:52:50 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:30.987 12:52:50 -- setup/common.sh@32 -- # continue 00:30:30.987 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.987 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.987 12:52:50 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:30.987 12:52:50 -- setup/common.sh@32 -- # continue 00:30:30.987 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.987 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.987 12:52:50 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:30.987 12:52:50 -- setup/common.sh@32 -- # continue 00:30:30.987 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.987 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.987 12:52:50 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:30.987 12:52:50 -- setup/common.sh@32 -- # continue 00:30:30.987 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.987 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.987 12:52:50 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:30.987 12:52:50 -- setup/common.sh@32 -- # continue 00:30:30.987 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.987 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.987 12:52:50 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:30.987 12:52:50 -- setup/common.sh@32 -- # continue 00:30:30.987 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.987 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.987 12:52:50 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:30.987 12:52:50 -- setup/common.sh@32 -- # continue 00:30:30.987 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.987 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.987 12:52:50 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:30.987 12:52:50 -- setup/common.sh@32 -- # continue 00:30:30.987 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.987 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.987 12:52:50 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:30.987 12:52:50 -- setup/common.sh@32 -- # continue 00:30:30.987 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.987 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.987 12:52:50 -- 
setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:30.987 12:52:50 -- setup/common.sh@32 -- # continue 00:30:30.987 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.987 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.987 12:52:50 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:30.987 12:52:50 -- setup/common.sh@32 -- # continue 00:30:30.987 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.987 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.987 12:52:50 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:30.987 12:52:50 -- setup/common.sh@32 -- # continue 00:30:30.987 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.987 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.987 12:52:50 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:30.987 12:52:50 -- setup/common.sh@32 -- # continue 00:30:30.987 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.987 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.987 12:52:50 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:30.987 12:52:50 -- setup/common.sh@32 -- # continue 00:30:30.987 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.987 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.987 12:52:50 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:30.987 12:52:50 -- setup/common.sh@32 -- # continue 00:30:30.987 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.988 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.988 12:52:50 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:30.988 12:52:50 -- setup/common.sh@32 -- # continue 00:30:30.988 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.988 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.988 12:52:50 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:30.988 12:52:50 -- setup/common.sh@32 -- # continue 00:30:30.988 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.988 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.988 12:52:50 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:30.988 12:52:50 -- setup/common.sh@32 -- # continue 00:30:30.988 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.988 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.988 12:52:50 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:30.988 12:52:50 -- setup/common.sh@32 -- # continue 00:30:30.988 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.988 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.988 12:52:50 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:30.988 12:52:50 -- setup/common.sh@32 -- # continue 00:30:30.988 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.988 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.988 12:52:50 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:30.988 12:52:50 -- setup/common.sh@32 -- # continue 00:30:30.988 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.988 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.988 12:52:50 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:30.988 12:52:50 -- setup/common.sh@32 -- # continue 00:30:30.988 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.988 12:52:50 -- 
setup/common.sh@31 -- # read -r var val _ 00:30:30.988 12:52:50 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:30.988 12:52:50 -- setup/common.sh@32 -- # continue 00:30:30.988 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.988 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.988 12:52:50 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:30.988 12:52:50 -- setup/common.sh@32 -- # continue 00:30:30.988 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.988 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.988 12:52:50 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:30.988 12:52:50 -- setup/common.sh@32 -- # continue 00:30:30.988 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.988 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.988 12:52:50 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:30.988 12:52:50 -- setup/common.sh@32 -- # continue 00:30:30.988 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.988 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.988 12:52:50 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:30.988 12:52:50 -- setup/common.sh@32 -- # continue 00:30:30.988 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.988 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.988 12:52:50 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:30.988 12:52:50 -- setup/common.sh@32 -- # continue 00:30:30.988 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.988 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.988 12:52:50 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:30.988 12:52:50 -- setup/common.sh@32 -- # continue 00:30:30.988 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.988 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.988 12:52:50 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:30.988 12:52:50 -- setup/common.sh@32 -- # continue 00:30:30.988 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.988 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.988 12:52:50 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:30.988 12:52:50 -- setup/common.sh@32 -- # continue 00:30:30.988 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.988 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.988 12:52:50 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:30.988 12:52:50 -- setup/common.sh@32 -- # continue 00:30:30.988 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.988 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.988 12:52:50 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:30.988 12:52:50 -- setup/common.sh@33 -- # echo 0 00:30:30.988 12:52:50 -- setup/common.sh@33 -- # return 0 00:30:30.988 12:52:50 -- setup/hugepages.sh@97 -- # anon=0 00:30:30.988 12:52:50 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:30:30.988 12:52:50 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:30:30.988 12:52:50 -- setup/common.sh@18 -- # local node= 00:30:30.988 12:52:50 -- setup/common.sh@19 -- # local var val 00:30:30.988 12:52:50 -- setup/common.sh@20 -- # local mem_f mem 00:30:30.988 12:52:50 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:30:30.988 12:52:50 -- 
setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:30:30.988 12:52:50 -- setup/common.sh@25 -- # [[ -n '' ]] 00:30:30.988 12:52:50 -- setup/common.sh@28 -- # mapfile -t mem 00:30:30.988 12:52:50 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:30:30.988 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.988 12:52:50 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7897152 kB' 'MemAvailable: 10509308 kB' 'Buffers: 2436 kB' 'Cached: 2814744 kB' 'SwapCached: 0 kB' 'Active: 491732 kB' 'Inactive: 2444472 kB' 'Active(anon): 129492 kB' 'Inactive(anon): 0 kB' 'Active(file): 362240 kB' 'Inactive(file): 2444472 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 120584 kB' 'Mapped: 48812 kB' 'Shmem: 10468 kB' 'KReclaimable: 84764 kB' 'Slab: 164580 kB' 'SReclaimable: 84764 kB' 'SUnreclaim: 79816 kB' 'KernelStack: 6560 kB' 'PageTables: 4264 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 351704 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54916 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:30:30.988 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.988 12:52:50 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:30.988 12:52:50 -- setup/common.sh@32 -- # continue 00:30:30.988 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.988 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.988 12:52:50 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:30.988 12:52:50 -- setup/common.sh@32 -- # continue 00:30:30.988 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.988 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.988 12:52:50 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:30.988 12:52:50 -- setup/common.sh@32 -- # continue 00:30:30.988 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.988 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.988 12:52:50 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:30.988 12:52:50 -- setup/common.sh@32 -- # continue 00:30:30.988 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.988 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.988 12:52:50 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:30.988 12:52:50 -- setup/common.sh@32 -- # continue 00:30:30.988 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.988 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.988 12:52:50 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:30.988 12:52:50 -- setup/common.sh@32 -- # continue 00:30:30.988 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.988 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.988 12:52:50 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:30.988 12:52:50 -- setup/common.sh@32 -- # continue 
00:30:30.988 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.988 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.988 12:52:50 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:30.988 12:52:50 -- setup/common.sh@32 -- # continue 00:30:30.988 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.988 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.988 12:52:50 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:30.988 12:52:50 -- setup/common.sh@32 -- # continue 00:30:30.988 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.988 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.988 12:52:50 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:30.988 12:52:50 -- setup/common.sh@32 -- # continue 00:30:30.988 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.988 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.988 12:52:50 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:30.988 12:52:50 -- setup/common.sh@32 -- # continue 00:30:30.988 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.988 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.988 12:52:50 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:30.988 12:52:50 -- setup/common.sh@32 -- # continue 00:30:30.988 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.988 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.988 12:52:50 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:30.988 12:52:50 -- setup/common.sh@32 -- # continue 00:30:30.988 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.988 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.988 12:52:50 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:30.988 12:52:50 -- setup/common.sh@32 -- # continue 00:30:30.989 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.989 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.989 12:52:50 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:30.989 12:52:50 -- setup/common.sh@32 -- # continue 00:30:30.989 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.989 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.989 12:52:50 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:30.989 12:52:50 -- setup/common.sh@32 -- # continue 00:30:30.989 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.989 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.989 12:52:50 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:30.989 12:52:50 -- setup/common.sh@32 -- # continue 00:30:30.989 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.989 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.989 12:52:50 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:30.989 12:52:50 -- setup/common.sh@32 -- # continue 00:30:30.989 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.989 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.989 12:52:50 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:30.989 12:52:50 -- setup/common.sh@32 -- # continue 00:30:30.989 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.989 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.989 12:52:50 -- setup/common.sh@32 -- # [[ Writeback == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:30.989 12:52:50 -- setup/common.sh@32 -- # continue 00:30:30.989 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.989 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.989 12:52:50 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:30.989 12:52:50 -- setup/common.sh@32 -- # continue 00:30:30.989 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.989 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.989 12:52:50 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:30.989 12:52:50 -- setup/common.sh@32 -- # continue 00:30:30.989 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.989 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.989 12:52:50 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:30.989 12:52:50 -- setup/common.sh@32 -- # continue 00:30:30.989 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.989 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.989 12:52:50 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:30.989 12:52:50 -- setup/common.sh@32 -- # continue 00:30:30.989 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.989 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.989 12:52:50 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:30.989 12:52:50 -- setup/common.sh@32 -- # continue 00:30:30.989 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.989 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.989 12:52:50 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:30.989 12:52:50 -- setup/common.sh@32 -- # continue 00:30:30.989 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.989 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.989 12:52:50 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:30.989 12:52:50 -- setup/common.sh@32 -- # continue 00:30:30.989 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.989 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.989 12:52:50 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:30.989 12:52:50 -- setup/common.sh@32 -- # continue 00:30:30.989 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.989 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.989 12:52:50 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:30.989 12:52:50 -- setup/common.sh@32 -- # continue 00:30:30.989 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.989 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.989 12:52:50 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:30.989 12:52:50 -- setup/common.sh@32 -- # continue 00:30:30.989 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.989 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.989 12:52:50 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:30.989 12:52:50 -- setup/common.sh@32 -- # continue 00:30:30.989 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.989 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.989 12:52:50 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:30.989 12:52:50 -- setup/common.sh@32 -- # continue 00:30:30.989 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.989 12:52:50 -- setup/common.sh@31 
-- # read -r var val _ 00:30:30.989 12:52:50 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:30.989 12:52:50 -- setup/common.sh@32 -- # continue 00:30:30.989 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.989 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.989 12:52:50 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:30.989 12:52:50 -- setup/common.sh@32 -- # continue 00:30:30.989 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.989 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.989 12:52:50 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:30.989 12:52:50 -- setup/common.sh@32 -- # continue 00:30:30.989 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.989 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.989 12:52:50 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:30.989 12:52:50 -- setup/common.sh@32 -- # continue 00:30:30.989 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.989 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.989 12:52:50 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:30.989 12:52:50 -- setup/common.sh@32 -- # continue 00:30:30.989 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.989 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.989 12:52:50 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:30.989 12:52:50 -- setup/common.sh@32 -- # continue 00:30:30.989 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.989 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.989 12:52:50 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:30.989 12:52:50 -- setup/common.sh@32 -- # continue 00:30:30.989 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.989 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.989 12:52:50 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:30.989 12:52:50 -- setup/common.sh@32 -- # continue 00:30:30.989 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.989 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.989 12:52:50 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:30.989 12:52:50 -- setup/common.sh@32 -- # continue 00:30:30.989 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.989 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.989 12:52:50 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:30.989 12:52:50 -- setup/common.sh@32 -- # continue 00:30:30.989 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.989 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.989 12:52:50 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:30.989 12:52:50 -- setup/common.sh@32 -- # continue 00:30:30.989 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.989 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.989 12:52:50 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:30.989 12:52:50 -- setup/common.sh@32 -- # continue 00:30:30.989 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.989 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.989 12:52:50 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:30.989 12:52:50 -- 
setup/common.sh@32 -- # continue 00:30:30.989 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:30.989 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:30.989 12:52:50 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:31.249 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.249 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.249 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.249 12:52:50 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:31.249 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.249 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.249 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.249 12:52:50 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:31.249 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.249 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.249 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.249 12:52:50 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:31.249 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.249 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.249 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.249 12:52:50 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:31.249 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.249 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.249 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.249 12:52:50 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:31.249 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.249 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.249 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.249 12:52:50 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:31.249 12:52:50 -- setup/common.sh@33 -- # echo 0 00:30:31.249 12:52:50 -- setup/common.sh@33 -- # return 0 00:30:31.249 12:52:50 -- setup/hugepages.sh@99 -- # surp=0 00:30:31.249 12:52:50 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:30:31.249 12:52:50 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:30:31.249 12:52:50 -- setup/common.sh@18 -- # local node= 00:30:31.249 12:52:50 -- setup/common.sh@19 -- # local var val 00:30:31.249 12:52:50 -- setup/common.sh@20 -- # local mem_f mem 00:30:31.249 12:52:50 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:30:31.249 12:52:50 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:30:31.249 12:52:50 -- setup/common.sh@25 -- # [[ -n '' ]] 00:30:31.249 12:52:50 -- setup/common.sh@28 -- # mapfile -t mem 00:30:31.249 12:52:50 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:30:31.249 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.249 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.249 12:52:50 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7896900 kB' 'MemAvailable: 10509056 kB' 'Buffers: 2436 kB' 'Cached: 2814744 kB' 'SwapCached: 0 kB' 'Active: 491696 kB' 'Inactive: 2444472 kB' 'Active(anon): 129456 kB' 'Inactive(anon): 0 kB' 'Active(file): 362240 kB' 'Inactive(file): 2444472 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 120636 kB' 'Mapped: 48692 kB' 'Shmem: 10468 kB' 'KReclaimable: 84764 kB' 'Slab: 
164564 kB' 'SReclaimable: 84764 kB' 'SUnreclaim: 79800 kB' 'KernelStack: 6592 kB' 'PageTables: 4368 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 351704 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54900 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:30:31.249 12:52:50 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:31.249 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.249 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.249 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.249 12:52:50 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:31.249 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.249 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.249 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.249 12:52:50 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:31.249 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.249 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.249 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.249 12:52:50 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:31.249 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.249 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.249 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.249 12:52:50 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:31.249 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.249 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.249 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.249 12:52:50 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:31.249 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.249 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.249 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.249 12:52:50 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:31.249 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.249 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.249 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.249 12:52:50 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:31.249 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.249 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.249 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.249 12:52:50 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:31.249 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.249 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.249 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.249 12:52:50 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:31.249 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.249 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.249 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.249 12:52:50 -- 
setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:31.249 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.249 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.249 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.249 12:52:50 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:31.249 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.249 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.249 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.249 12:52:50 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:31.249 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.249 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.249 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.249 12:52:50 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:31.249 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.249 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.249 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.249 12:52:50 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:31.249 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.249 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.249 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.249 12:52:50 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:31.249 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.249 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.249 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.249 12:52:50 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:31.249 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.249 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.249 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.249 12:52:50 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:31.249 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.249 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.249 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.249 12:52:50 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:31.249 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.249 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.249 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.249 12:52:50 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:31.249 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.249 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.249 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.249 12:52:50 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:31.249 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.249 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.249 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.249 12:52:50 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:31.249 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.249 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.249 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.249 12:52:50 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:31.249 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.249 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.249 
12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.249 12:52:50 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:31.249 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.249 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.249 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.249 12:52:50 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:31.249 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.249 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.249 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.249 12:52:50 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:31.249 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.249 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.249 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.249 12:52:50 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:31.249 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.249 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.249 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.249 12:52:50 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:31.249 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.249 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.249 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.249 12:52:50 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:31.249 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.249 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.249 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.249 12:52:50 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:31.249 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.249 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.249 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.249 12:52:50 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:31.249 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.249 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.249 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.249 12:52:50 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:31.249 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.249 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.249 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.249 12:52:50 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:31.249 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.249 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.249 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.249 12:52:50 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:31.249 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.249 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.249 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.249 12:52:50 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:31.249 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.249 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.249 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.249 12:52:50 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:31.249 12:52:50 
-- setup/common.sh@32 -- # continue 00:30:31.249 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.250 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.250 12:52:50 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:31.250 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.250 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.250 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.250 12:52:50 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:31.250 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.250 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.250 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.250 12:52:50 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:31.250 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.250 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.250 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.250 12:52:50 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:31.250 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.250 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.250 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.250 12:52:50 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:31.250 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.250 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.250 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.250 12:52:50 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:31.250 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.250 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.250 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.250 12:52:50 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:31.250 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.250 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.250 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.250 12:52:50 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:31.250 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.250 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.250 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.250 12:52:50 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:31.250 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.250 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.250 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.250 12:52:50 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:31.250 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.250 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.250 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.250 12:52:50 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:31.250 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.250 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.250 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.250 12:52:50 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:31.250 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.250 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.250 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 
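The loop traced above is setup/common.sh's get_meminfo walking /proc/meminfo line by line: with IFS set to ': ' it reads each entry into a key and a value, skips every key that is not the requested one (here HugePages_Rsvd), and echoes the matching value. A minimal standalone sketch of that lookup, with a made-up helper name and no per-node handling, might look like this:

    #!/usr/bin/env bash
    # Sketch only: mirrors the traced read loop, not the real setup/common.sh API.
    get_meminfo_value() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            # Keys in /proc/meminfo end in ':', which IFS=': ' strips off.
            if [[ $var == "$get" ]]; then
                echo "$val"   # a kB figure, or a page count for HugePages_* keys
                return 0
            fi
        done < /proc/meminfo
        return 1
    }

    get_meminfo_value HugePages_Rsvd   # prints 0 in the run traced here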
00:30:31.250 12:52:50 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:31.250 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.250 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.250 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.250 12:52:50 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:31.250 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.250 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.250 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.250 12:52:50 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:31.250 12:52:50 -- setup/common.sh@33 -- # echo 0 00:30:31.250 12:52:50 -- setup/common.sh@33 -- # return 0 00:30:31.250 nr_hugepages=512 00:30:31.250 resv_hugepages=0 00:30:31.250 surplus_hugepages=0 00:30:31.250 anon_hugepages=0 00:30:31.250 12:52:50 -- setup/hugepages.sh@100 -- # resv=0 00:30:31.250 12:52:50 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:30:31.250 12:52:50 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:30:31.250 12:52:50 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:30:31.250 12:52:50 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:30:31.250 12:52:50 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:30:31.250 12:52:50 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:30:31.250 12:52:50 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:30:31.250 12:52:50 -- setup/common.sh@17 -- # local get=HugePages_Total 00:30:31.250 12:52:50 -- setup/common.sh@18 -- # local node= 00:30:31.250 12:52:50 -- setup/common.sh@19 -- # local var val 00:30:31.250 12:52:50 -- setup/common.sh@20 -- # local mem_f mem 00:30:31.250 12:52:50 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:30:31.250 12:52:50 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:30:31.250 12:52:50 -- setup/common.sh@25 -- # [[ -n '' ]] 00:30:31.250 12:52:50 -- setup/common.sh@28 -- # mapfile -t mem 00:30:31.250 12:52:50 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:30:31.250 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.250 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.250 12:52:50 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7896900 kB' 'MemAvailable: 10509056 kB' 'Buffers: 2436 kB' 'Cached: 2814744 kB' 'SwapCached: 0 kB' 'Active: 491468 kB' 'Inactive: 2444472 kB' 'Active(anon): 129228 kB' 'Inactive(anon): 0 kB' 'Active(file): 362240 kB' 'Inactive(file): 2444472 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 120376 kB' 'Mapped: 48692 kB' 'Shmem: 10468 kB' 'KReclaimable: 84764 kB' 'Slab: 164564 kB' 'SReclaimable: 84764 kB' 'SUnreclaim: 79800 kB' 'KernelStack: 6592 kB' 'PageTables: 4368 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 351704 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54884 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 167788 kB' 
'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:30:31.250 12:52:50 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:31.250 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.250 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.250 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.250 12:52:50 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:31.250 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.250 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.250 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.250 12:52:50 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:31.250 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.250 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.250 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.250 12:52:50 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:31.250 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.250 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.250 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.250 12:52:50 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:31.250 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.250 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.250 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.250 12:52:50 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:31.250 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.250 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.250 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.250 12:52:50 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:31.250 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.250 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.250 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.250 12:52:50 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:31.250 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.250 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.250 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.250 12:52:50 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:31.250 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.250 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.250 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.250 12:52:50 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:31.250 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.250 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.250 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.250 12:52:50 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:31.250 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.250 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.250 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.250 12:52:50 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:31.250 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.250 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.250 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.250 12:52:50 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:30:31.250 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.250 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.250 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.250 12:52:50 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:31.250 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.250 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.250 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.250 12:52:50 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:31.250 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.250 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.250 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.250 12:52:50 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:31.250 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.250 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.250 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.250 12:52:50 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:31.250 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.250 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.250 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.250 12:52:50 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:31.250 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.250 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.250 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.250 12:52:50 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:31.250 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.250 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.250 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.250 12:52:50 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:31.250 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.250 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.250 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.250 12:52:50 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:31.250 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.250 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.250 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.250 12:52:50 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:31.250 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.250 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.250 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.250 12:52:50 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:31.250 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.250 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.250 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.250 12:52:50 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:31.250 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.250 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.250 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.250 12:52:50 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:31.250 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.250 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.250 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 
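The same helper takes an optional node argument: with no node it stays on /proc/meminfo (the "/sys/devices/system/node/node/meminfo" existence test in the trace fails because the node number is empty), and with a node it switches to that node's sysfs meminfo, whose lines carry a "Node <n> " prefix that the trace strips with "${mem[@]#Node +([0-9]) }"; the per-node form shows up further down when HugePages_Surp is read for node 0. A rough sketch of that source selection, with an illustrative function name:

    # Sketch: pick the meminfo source the way the traced helper does.
    meminfo_file() {
        local node=$1 mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            # Per-node counters; each line is prefixed with "Node <n> ".
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        echo "$mem_f"
    }

    meminfo_file      # -> /proc/meminfo
    meminfo_file 0    # -> /sys/devices/system/node/node0/meminfo, if node0 exists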
00:30:31.250 12:52:50 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:31.250 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.250 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.250 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.250 12:52:50 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:31.250 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.250 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.250 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.250 12:52:50 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:31.250 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.250 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.250 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.250 12:52:50 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:31.250 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.250 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.250 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.250 12:52:50 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:31.250 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.250 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.250 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.250 12:52:50 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:31.250 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.250 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.250 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.250 12:52:50 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:31.250 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.250 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.250 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.250 12:52:50 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:31.250 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.250 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.250 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.250 12:52:50 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:31.250 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.250 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.250 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.250 12:52:50 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:31.250 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.250 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.250 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.250 12:52:50 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:31.250 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.250 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.250 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.250 12:52:50 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:31.250 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.250 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.250 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.250 12:52:50 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:31.250 12:52:50 -- setup/common.sh@32 
-- # continue 00:30:31.250 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.250 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.250 12:52:50 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:31.250 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.250 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.250 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.250 12:52:50 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:31.250 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.250 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.250 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.250 12:52:50 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:31.250 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.250 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.250 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.250 12:52:50 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:31.250 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.250 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.250 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.250 12:52:50 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:31.250 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.250 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.250 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.250 12:52:50 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:31.250 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.250 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.250 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.251 12:52:50 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:31.251 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.251 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.251 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.251 12:52:50 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:31.251 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.251 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.251 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.251 12:52:50 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:31.251 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.251 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.251 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.251 12:52:50 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:31.251 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.251 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.251 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.251 12:52:50 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:31.251 12:52:50 -- setup/common.sh@33 -- # echo 512 00:30:31.251 12:52:50 -- setup/common.sh@33 -- # return 0 00:30:31.251 12:52:50 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:30:31.251 12:52:50 -- setup/hugepages.sh@112 -- # get_nodes 00:30:31.251 12:52:50 -- setup/hugepages.sh@27 -- # local node 00:30:31.251 12:52:50 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:30:31.251 12:52:50 -- 
setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:30:31.251 12:52:50 -- setup/hugepages.sh@32 -- # no_nodes=1 00:30:31.251 12:52:50 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:30:31.251 12:52:50 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:30:31.251 12:52:50 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:30:31.251 12:52:50 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:30:31.251 12:52:50 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:30:31.251 12:52:50 -- setup/common.sh@18 -- # local node=0 00:30:31.251 12:52:50 -- setup/common.sh@19 -- # local var val 00:30:31.251 12:52:50 -- setup/common.sh@20 -- # local mem_f mem 00:30:31.251 12:52:50 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:30:31.251 12:52:50 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:30:31.251 12:52:50 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:30:31.251 12:52:50 -- setup/common.sh@28 -- # mapfile -t mem 00:30:31.251 12:52:50 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:30:31.251 12:52:50 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7899052 kB' 'MemUsed: 4342928 kB' 'SwapCached: 0 kB' 'Active: 491648 kB' 'Inactive: 2444472 kB' 'Active(anon): 129408 kB' 'Inactive(anon): 0 kB' 'Active(file): 362240 kB' 'Inactive(file): 2444472 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'FilePages: 2817180 kB' 'Mapped: 48692 kB' 'AnonPages: 120516 kB' 'Shmem: 10468 kB' 'KernelStack: 6576 kB' 'PageTables: 4312 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 84764 kB' 'Slab: 164564 kB' 'SReclaimable: 84764 kB' 'SUnreclaim: 79800 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:30:31.251 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.251 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.251 12:52:50 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:31.251 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.251 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.251 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.251 12:52:50 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:31.251 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.251 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.251 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.251 12:52:50 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:31.251 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.251 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.251 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.251 12:52:50 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:31.251 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.251 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.251 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.251 12:52:50 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:31.251 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.251 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.251 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.251 12:52:50 -- setup/common.sh@32 -- # [[ 
Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:31.251 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.251 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.251 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.251 12:52:50 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:31.251 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.251 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.251 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.251 12:52:50 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:31.251 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.251 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.251 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.251 12:52:50 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:31.251 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.251 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.251 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.251 12:52:50 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:31.251 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.251 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.251 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.251 12:52:50 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:31.251 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.251 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.251 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.251 12:52:50 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:31.251 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.251 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.251 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.251 12:52:50 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:31.251 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.251 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.251 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.251 12:52:50 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:31.251 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.251 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.251 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.251 12:52:50 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:31.251 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.251 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.251 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.251 12:52:50 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:31.251 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.251 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.251 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.251 12:52:50 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:31.251 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.251 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.251 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.251 12:52:50 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:31.251 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.251 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.251 12:52:50 -- 
setup/common.sh@31 -- # read -r var val _ 00:30:31.251 12:52:50 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:31.251 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.251 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.251 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.251 12:52:50 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:31.251 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.251 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.251 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.251 12:52:50 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:31.251 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.251 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.251 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.251 12:52:50 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:31.251 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.251 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.251 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.251 12:52:50 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:31.251 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.251 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.251 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.251 12:52:50 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:31.251 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.251 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.251 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.251 12:52:50 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:31.251 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.251 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.251 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.251 12:52:50 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:31.251 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.251 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.251 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.251 12:52:50 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:31.251 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.251 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.251 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.251 12:52:50 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:31.251 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.251 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.251 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.251 12:52:50 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:31.251 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.251 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.251 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.251 12:52:50 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:31.251 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.251 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.251 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.251 12:52:50 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:31.251 12:52:50 -- 
setup/common.sh@32 -- # continue 00:30:31.251 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.251 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.251 12:52:50 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:31.251 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.251 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.251 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.251 12:52:50 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:31.251 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.251 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.251 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.251 12:52:50 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:31.251 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.251 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.251 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.251 12:52:50 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:31.251 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.251 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.251 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.251 12:52:50 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:31.251 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.251 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.251 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.251 12:52:50 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:31.251 12:52:50 -- setup/common.sh@33 -- # echo 0 00:30:31.251 12:52:50 -- setup/common.sh@33 -- # return 0 00:30:31.251 12:52:50 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:30:31.251 12:52:50 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:30:31.251 12:52:50 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:30:31.251 12:52:50 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:30:31.251 node0=512 expecting 512 00:30:31.251 ************************************ 00:30:31.251 END TEST per_node_1G_alloc 00:30:31.251 ************************************ 00:30:31.251 12:52:50 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:30:31.251 12:52:50 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:30:31.251 00:30:31.251 real 0m0.580s 00:30:31.251 user 0m0.280s 00:30:31.251 sys 0m0.300s 00:30:31.251 12:52:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:31.251 12:52:50 -- common/autotest_common.sh@10 -- # set +x 00:30:31.251 12:52:50 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:30:31.251 12:52:50 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:30:31.251 12:52:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:31.251 12:52:50 -- common/autotest_common.sh@10 -- # set +x 00:30:31.251 ************************************ 00:30:31.251 START TEST even_2G_alloc 00:30:31.251 ************************************ 00:30:31.251 12:52:50 -- common/autotest_common.sh@1104 -- # even_2G_alloc 00:30:31.251 12:52:50 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:30:31.251 12:52:50 -- setup/hugepages.sh@49 -- # local size=2097152 00:30:31.251 12:52:50 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:30:31.251 12:52:50 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:30:31.251 12:52:50 -- 
setup/hugepages.sh@57 -- # nr_hugepages=1024 00:30:31.251 12:52:50 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:30:31.251 12:52:50 -- setup/hugepages.sh@62 -- # user_nodes=() 00:30:31.251 12:52:50 -- setup/hugepages.sh@62 -- # local user_nodes 00:30:31.251 12:52:50 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:30:31.251 12:52:50 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:30:31.251 12:52:50 -- setup/hugepages.sh@67 -- # nodes_test=() 00:30:31.251 12:52:50 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:30:31.251 12:52:50 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:30:31.251 12:52:50 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:30:31.251 12:52:50 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:30:31.251 12:52:50 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:30:31.251 12:52:50 -- setup/hugepages.sh@83 -- # : 0 00:30:31.251 12:52:50 -- setup/hugepages.sh@84 -- # : 0 00:30:31.251 12:52:50 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:30:31.251 12:52:50 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:30:31.251 12:52:50 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:30:31.251 12:52:50 -- setup/hugepages.sh@153 -- # setup output 00:30:31.251 12:52:50 -- setup/common.sh@9 -- # [[ output == output ]] 00:30:31.251 12:52:50 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:30:31.509 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:31.509 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:30:31.509 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:30:31.769 12:52:50 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:30:31.769 12:52:50 -- setup/hugepages.sh@89 -- # local node 00:30:31.769 12:52:50 -- setup/hugepages.sh@90 -- # local sorted_t 00:30:31.769 12:52:50 -- setup/hugepages.sh@91 -- # local sorted_s 00:30:31.769 12:52:50 -- setup/hugepages.sh@92 -- # local surp 00:30:31.769 12:52:50 -- setup/hugepages.sh@93 -- # local resv 00:30:31.769 12:52:50 -- setup/hugepages.sh@94 -- # local anon 00:30:31.769 12:52:50 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:30:31.769 12:52:50 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:30:31.769 12:52:50 -- setup/common.sh@17 -- # local get=AnonHugePages 00:30:31.769 12:52:50 -- setup/common.sh@18 -- # local node= 00:30:31.769 12:52:50 -- setup/common.sh@19 -- # local var val 00:30:31.769 12:52:50 -- setup/common.sh@20 -- # local mem_f mem 00:30:31.769 12:52:50 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:30:31.769 12:52:50 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:30:31.769 12:52:50 -- setup/common.sh@25 -- # [[ -n '' ]] 00:30:31.769 12:52:50 -- setup/common.sh@28 -- # mapfile -t mem 00:30:31.769 12:52:50 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:30:31.769 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.769 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.769 12:52:50 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6847548 kB' 'MemAvailable: 9459712 kB' 'Buffers: 2436 kB' 'Cached: 2814744 kB' 'SwapCached: 0 kB' 'Active: 491932 kB' 'Inactive: 2444472 kB' 'Active(anon): 129692 kB' 'Inactive(anon): 0 kB' 'Active(file): 362240 kB' 'Inactive(file): 2444472 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 
kB' 'AnonPages: 121084 kB' 'Mapped: 48872 kB' 'Shmem: 10468 kB' 'KReclaimable: 84780 kB' 'Slab: 164564 kB' 'SReclaimable: 84780 kB' 'SUnreclaim: 79784 kB' 'KernelStack: 6584 kB' 'PageTables: 4452 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 351704 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54964 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:30:31.769 12:52:50 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:31.769 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.769 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.769 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.769 12:52:50 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:31.769 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.769 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.769 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.769 12:52:50 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:31.769 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.769 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.769 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.769 12:52:50 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:31.769 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.769 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.769 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.769 12:52:50 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:31.769 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.769 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.770 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.770 12:52:50 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:31.770 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.770 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.770 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.770 12:52:50 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:31.770 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.770 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.770 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.770 12:52:50 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:31.770 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.770 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.770 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.770 12:52:50 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:31.770 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.770 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.770 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.770 12:52:50 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:31.770 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.770 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.770 
12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.770 12:52:50 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:31.770 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.770 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.770 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.770 12:52:50 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:31.770 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.770 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.770 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.770 12:52:50 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:31.770 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.770 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.770 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.770 12:52:50 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:31.770 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.770 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.770 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.770 12:52:50 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:31.770 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.770 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.770 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.770 12:52:50 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:31.770 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.770 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.770 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.770 12:52:50 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:31.770 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.770 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.770 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.770 12:52:50 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:31.770 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.770 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.770 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.770 12:52:50 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:31.770 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.770 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.770 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.770 12:52:50 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:31.770 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.770 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.770 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.770 12:52:50 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:31.770 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.770 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.770 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.770 12:52:50 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:31.770 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.770 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.770 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.770 12:52:50 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:31.770 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.770 
12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.770 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.770 12:52:50 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:31.770 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.770 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.770 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.770 12:52:50 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:31.770 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.770 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.770 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.770 12:52:50 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:31.770 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.770 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.770 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.770 12:52:50 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:31.770 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.770 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.770 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.770 12:52:50 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:31.770 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.770 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.770 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.770 12:52:50 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:31.770 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.770 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.770 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.770 12:52:50 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:31.770 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.770 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.770 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.770 12:52:50 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:31.770 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.770 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.770 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.770 12:52:50 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:31.770 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.770 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.770 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.770 12:52:50 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:31.770 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.770 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.770 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.770 12:52:50 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:31.770 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.770 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.770 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.770 12:52:50 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:31.770 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.770 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.770 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.770 12:52:50 -- setup/common.sh@32 -- # [[ VmallocTotal == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:31.770 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.770 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.770 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.770 12:52:50 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:31.770 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.770 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.770 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.770 12:52:50 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:31.770 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.770 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.770 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.770 12:52:50 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:31.770 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.770 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.770 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.770 12:52:50 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:31.770 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.770 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.770 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.770 12:52:50 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:31.770 12:52:50 -- setup/common.sh@33 -- # echo 0 00:30:31.770 12:52:50 -- setup/common.sh@33 -- # return 0 00:30:31.770 12:52:50 -- setup/hugepages.sh@97 -- # anon=0 00:30:31.770 12:52:50 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:30:31.770 12:52:50 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:30:31.770 12:52:50 -- setup/common.sh@18 -- # local node= 00:30:31.770 12:52:50 -- setup/common.sh@19 -- # local var val 00:30:31.770 12:52:50 -- setup/common.sh@20 -- # local mem_f mem 00:30:31.770 12:52:50 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:30:31.770 12:52:50 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:30:31.770 12:52:50 -- setup/common.sh@25 -- # [[ -n '' ]] 00:30:31.770 12:52:50 -- setup/common.sh@28 -- # mapfile -t mem 00:30:31.770 12:52:50 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:30:31.770 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.771 12:52:50 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6847548 kB' 'MemAvailable: 9459712 kB' 'Buffers: 2436 kB' 'Cached: 2814744 kB' 'SwapCached: 0 kB' 'Active: 491456 kB' 'Inactive: 2444472 kB' 'Active(anon): 129216 kB' 'Inactive(anon): 0 kB' 'Active(file): 362240 kB' 'Inactive(file): 2444472 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 120620 kB' 'Mapped: 48692 kB' 'Shmem: 10468 kB' 'KReclaimable: 84780 kB' 'Slab: 164552 kB' 'SReclaimable: 84780 kB' 'SUnreclaim: 79772 kB' 'KernelStack: 6592 kB' 'PageTables: 4368 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 351704 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54948 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 
'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:30:31.771 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.771 12:52:50 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:31.771 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.771 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.771 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.771 12:52:50 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:31.771 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.771 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.771 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.771 12:52:50 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:31.771 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.771 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.771 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.771 12:52:50 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:31.771 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.771 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.771 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.771 12:52:50 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:31.771 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.771 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.771 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.771 12:52:50 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:31.771 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.771 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.771 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.771 12:52:50 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:31.771 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.771 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.771 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.771 12:52:50 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:31.771 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.771 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.771 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.771 12:52:50 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:31.771 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.771 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.771 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.771 12:52:50 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:31.771 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.771 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.771 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.771 12:52:50 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:31.771 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.771 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.771 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.771 12:52:50 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:31.771 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.771 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.771 12:52:50 -- setup/common.sh@31 
-- # read -r var val _ 00:30:31.771 12:52:50 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:31.771 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.771 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.771 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.771 12:52:50 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:31.771 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.771 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.771 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.771 12:52:50 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:31.771 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.771 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.771 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.771 12:52:50 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:31.771 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.771 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.771 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.771 12:52:50 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:31.771 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.771 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.771 12:52:50 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.771 12:52:50 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:31.771 12:52:50 -- setup/common.sh@32 -- # continue 00:30:31.771 12:52:50 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.771 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.771 12:52:51 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:31.771 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.771 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.771 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.771 12:52:51 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:31.771 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.771 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.771 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.771 12:52:51 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:31.771 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.771 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.771 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.771 12:52:51 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:31.771 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.771 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.771 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.771 12:52:51 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:31.771 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.771 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.771 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.771 12:52:51 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:31.771 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.771 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.771 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.771 12:52:51 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:31.771 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.771 12:52:51 -- 
setup/common.sh@31 -- # IFS=': ' 00:30:31.771 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.771 12:52:51 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:31.771 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.771 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.771 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.771 12:52:51 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:31.771 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.771 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.771 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.771 12:52:51 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:31.771 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.771 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.771 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.771 12:52:51 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:31.771 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.771 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.771 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.771 12:52:51 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:31.771 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.771 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.771 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.771 12:52:51 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:31.771 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.771 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.771 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.771 12:52:51 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:31.771 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.771 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.771 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.771 12:52:51 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:31.771 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.771 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.771 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.771 12:52:51 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:31.771 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.771 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.771 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.771 12:52:51 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:31.771 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.771 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.772 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.772 12:52:51 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:31.772 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.772 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.772 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.772 12:52:51 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:31.772 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.772 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.772 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.772 12:52:51 -- setup/common.sh@32 -- # [[ VmallocChunk == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:31.772 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.772 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.772 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.772 12:52:51 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:31.772 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.772 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.772 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.772 12:52:51 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:31.772 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.772 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.772 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.772 12:52:51 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:31.772 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.772 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.772 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.772 12:52:51 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:31.772 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.772 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.772 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.772 12:52:51 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:31.772 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.772 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.772 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.772 12:52:51 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:31.772 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.772 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.772 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.772 12:52:51 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:31.772 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.772 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.772 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.772 12:52:51 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:31.772 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.772 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.772 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.772 12:52:51 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:31.772 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.772 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.772 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.772 12:52:51 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:31.772 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.772 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.772 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.772 12:52:51 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:31.772 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.772 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.772 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.772 12:52:51 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:31.772 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.772 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 
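The xtrace above and below is setup/common.sh's get_meminfo walking /proc/meminfo key by key with IFS=': ' until it reaches the field it was asked for (HugePages_Surp here); every non-matching key only triggers the same read/compare/continue triple, which is why those three commands repeat once per meminfo line. A minimal standalone sketch of that lookup pattern follows; it is not the SPDK helper itself, and it assumes a simplified sed-based strip of the per-node "Node <n> " prefix in place of the mapfile-plus-extglob expansion the traced script uses.

#!/usr/bin/env bash
# Minimal sketch of the get_meminfo lookup traced above; not the SPDK
# helper (the real one uses mapfile and an extglob expansion to strip
# the "Node <n> " prefix, and takes the node from a pre-set variable).
get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # A per-node lookup reads the sysfs copy instead, as in the trace.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local var val _
    # Same "IFS=': '; read -r var val _" idiom as the loop above: every
    # non-matching key is skipped, the first match prints its value.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < <(sed 's/^Node [0-9]* //' "$mem_f")
    return 1
}

get_meminfo HugePages_Surp     # prints 0 for the snapshot dumped above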
00:30:31.772 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.772 12:52:51 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:31.772 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.772 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.772 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.772 12:52:51 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:31.772 12:52:51 -- setup/common.sh@33 -- # echo 0 00:30:31.772 12:52:51 -- setup/common.sh@33 -- # return 0 00:30:31.772 12:52:51 -- setup/hugepages.sh@99 -- # surp=0 00:30:31.772 12:52:51 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:30:31.772 12:52:51 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:30:31.772 12:52:51 -- setup/common.sh@18 -- # local node= 00:30:31.772 12:52:51 -- setup/common.sh@19 -- # local var val 00:30:31.772 12:52:51 -- setup/common.sh@20 -- # local mem_f mem 00:30:31.772 12:52:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:30:31.772 12:52:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:30:31.772 12:52:51 -- setup/common.sh@25 -- # [[ -n '' ]] 00:30:31.772 12:52:51 -- setup/common.sh@28 -- # mapfile -t mem 00:30:31.772 12:52:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:30:31.772 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.772 12:52:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6847548 kB' 'MemAvailable: 9459712 kB' 'Buffers: 2436 kB' 'Cached: 2814744 kB' 'SwapCached: 0 kB' 'Active: 491500 kB' 'Inactive: 2444472 kB' 'Active(anon): 129260 kB' 'Inactive(anon): 0 kB' 'Active(file): 362240 kB' 'Inactive(file): 2444472 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 120624 kB' 'Mapped: 48692 kB' 'Shmem: 10468 kB' 'KReclaimable: 84780 kB' 'Slab: 164548 kB' 'SReclaimable: 84780 kB' 'SUnreclaim: 79768 kB' 'KernelStack: 6592 kB' 'PageTables: 4368 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 351704 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54964 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:30:31.772 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.772 12:52:51 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:31.772 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.772 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.772 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.772 12:52:51 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:31.772 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.772 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.772 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.772 12:52:51 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:31.772 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.772 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 
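Below, the same scan repeats for HugePages_Rsvd and then HugePages_Total, after which hugepages.sh only has to do arithmetic: the pool reported by the kernel must equal nr_hugepages plus surplus plus reserved pages, and node0 must carry the whole pool on this single-node VM, which is what the later "node0=1024 expecting 1024" line reports. A hedged sketch of that bookkeeping with the values visible in the dump above; meminfo_val is a hypothetical one-line stand-in for the lookup, not the SPDK helper.

#!/usr/bin/env bash
# Sketch of the verification arithmetic traced in hugepages.sh; the
# constants match the meminfo dump above (1024 total, 0 surplus,
# 0 reserved). meminfo_val is a hypothetical stand-in for get_meminfo.
meminfo_val() { awk -v k="$1:" '$1 == k { print $2; exit }' /proc/meminfo; }

nr_hugepages=1024                           # requested pool of 2048 kB pages
surp=$(meminfo_val HugePages_Surp)          # 0 in this trace
resv=$(meminfo_val HugePages_Rsvd)          # 0 in this trace
total=$(meminfo_val HugePages_Total)        # 1024 in this trace

# Global check, mirroring "(( 1024 == nr_hugepages + surp + resv ))" in the trace.
(( total == nr_hugepages + surp + resv )) || echo "unexpected global pool size"

# Per-node check: on this single-node VM node0 must hold the whole pool.
node0=$(awk '$3 == "HugePages_Total:" { print $4; exit }' \
        /sys/devices/system/node/node0/meminfo)
echo "node0=$node0 expecting $nr_hugepages"
(( node0 == nr_hugepages )) || echo "node0 pool size mismatch"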
00:30:31.772 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.772 12:52:51 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:31.772 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.772 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.772 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.772 12:52:51 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:31.772 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.772 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.772 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.772 12:52:51 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:31.772 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.772 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.772 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.772 12:52:51 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:31.772 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.772 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.772 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.772 12:52:51 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:31.772 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.772 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.772 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.772 12:52:51 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:31.772 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.772 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.772 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.772 12:52:51 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:31.772 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.772 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.772 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.772 12:52:51 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:31.772 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.772 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.772 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.772 12:52:51 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:31.772 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.772 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.772 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.772 12:52:51 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:31.772 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.772 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.772 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.772 12:52:51 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:31.772 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.772 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.772 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.772 12:52:51 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:31.772 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.773 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.773 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.773 12:52:51 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:31.773 12:52:51 
-- setup/common.sh@32 -- # continue 00:30:31.773 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.773 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.773 12:52:51 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:31.773 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.773 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.773 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.773 12:52:51 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:31.773 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.773 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.773 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.773 12:52:51 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:31.773 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.773 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.773 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.773 12:52:51 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:31.773 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.773 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.773 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.773 12:52:51 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:31.773 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.773 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.773 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.773 12:52:51 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:31.773 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.773 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.773 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.773 12:52:51 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:31.773 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.773 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.773 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.773 12:52:51 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:31.773 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.773 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.773 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.773 12:52:51 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:31.773 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.773 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.773 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.773 12:52:51 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:31.773 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.773 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.773 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.773 12:52:51 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:31.773 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.773 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.773 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.773 12:52:51 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:31.773 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.773 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.773 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.773 12:52:51 -- setup/common.sh@32 
-- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:31.773 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.773 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.773 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.773 12:52:51 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:31.773 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.773 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.773 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.773 12:52:51 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:31.773 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.773 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.773 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.773 12:52:51 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:31.773 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.773 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.773 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.773 12:52:51 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:31.773 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.773 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.773 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.773 12:52:51 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:31.773 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.773 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.773 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.773 12:52:51 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:31.773 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.773 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.773 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.773 12:52:51 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:31.773 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.773 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.773 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.773 12:52:51 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:31.773 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.773 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.773 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.773 12:52:51 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:31.773 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.773 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.773 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.773 12:52:51 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:31.773 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.773 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.773 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.773 12:52:51 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:31.773 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.773 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.773 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.773 12:52:51 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:31.773 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.773 12:52:51 -- setup/common.sh@31 -- # 
IFS=': ' 00:30:31.773 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.773 12:52:51 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:31.773 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.773 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.773 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.773 12:52:51 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:31.773 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.773 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.773 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.773 12:52:51 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:31.773 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.773 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.773 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.773 12:52:51 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:31.773 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.773 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.773 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.773 12:52:51 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:31.773 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.773 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.773 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.773 12:52:51 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:31.773 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.773 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.773 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.773 12:52:51 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:31.773 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.773 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.773 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.773 12:52:51 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:31.773 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.773 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.773 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.773 12:52:51 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:31.773 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.773 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.773 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.773 12:52:51 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:31.773 12:52:51 -- setup/common.sh@33 -- # echo 0 00:30:31.773 12:52:51 -- setup/common.sh@33 -- # return 0 00:30:31.773 nr_hugepages=1024 00:30:31.773 resv_hugepages=0 00:30:31.773 surplus_hugepages=0 00:30:31.773 anon_hugepages=0 00:30:31.773 12:52:51 -- setup/hugepages.sh@100 -- # resv=0 00:30:31.773 12:52:51 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:30:31.773 12:52:51 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:30:31.773 12:52:51 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:30:31.773 12:52:51 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:30:31.773 12:52:51 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:30:31.773 12:52:51 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:30:31.773 12:52:51 -- setup/hugepages.sh@110 -- # 
get_meminfo HugePages_Total 00:30:31.773 12:52:51 -- setup/common.sh@17 -- # local get=HugePages_Total 00:30:31.773 12:52:51 -- setup/common.sh@18 -- # local node= 00:30:31.773 12:52:51 -- setup/common.sh@19 -- # local var val 00:30:31.774 12:52:51 -- setup/common.sh@20 -- # local mem_f mem 00:30:31.774 12:52:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:30:31.774 12:52:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:30:31.774 12:52:51 -- setup/common.sh@25 -- # [[ -n '' ]] 00:30:31.774 12:52:51 -- setup/common.sh@28 -- # mapfile -t mem 00:30:31.774 12:52:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:30:31.774 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.774 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.774 12:52:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6847548 kB' 'MemAvailable: 9459712 kB' 'Buffers: 2436 kB' 'Cached: 2814744 kB' 'SwapCached: 0 kB' 'Active: 491464 kB' 'Inactive: 2444472 kB' 'Active(anon): 129224 kB' 'Inactive(anon): 0 kB' 'Active(file): 362240 kB' 'Inactive(file): 2444472 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 120332 kB' 'Mapped: 48692 kB' 'Shmem: 10468 kB' 'KReclaimable: 84780 kB' 'Slab: 164544 kB' 'SReclaimable: 84780 kB' 'SUnreclaim: 79764 kB' 'KernelStack: 6576 kB' 'PageTables: 4312 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 351704 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54964 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:30:31.774 12:52:51 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:31.774 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.774 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.774 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.774 12:52:51 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:31.774 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.774 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.774 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.774 12:52:51 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:31.774 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.774 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.774 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.774 12:52:51 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:31.774 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.774 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.774 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.774 12:52:51 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:31.774 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.774 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.774 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.774 12:52:51 -- setup/common.sh@32 
-- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:31.774 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.774 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.774 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.774 12:52:51 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:31.774 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.774 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.774 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.774 12:52:51 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:31.774 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.774 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.774 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.774 12:52:51 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:31.774 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.774 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.774 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.774 12:52:51 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:31.774 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.774 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.774 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.774 12:52:51 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:31.774 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.774 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.774 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.774 12:52:51 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:31.774 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.774 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.774 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.774 12:52:51 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:31.774 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.774 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.774 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.774 12:52:51 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:31.774 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.774 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.774 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.774 12:52:51 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:31.774 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.774 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.774 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.774 12:52:51 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:31.774 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.774 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.774 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.774 12:52:51 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:31.774 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.774 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.774 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.774 12:52:51 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:31.774 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.774 12:52:51 -- setup/common.sh@31 -- # 
IFS=': ' 00:30:31.774 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.774 12:52:51 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:31.774 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.774 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.774 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.774 12:52:51 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:31.774 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.774 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.774 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.774 12:52:51 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:31.774 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.774 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.774 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.774 12:52:51 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:31.774 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.774 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.774 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.774 12:52:51 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:31.774 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.774 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.774 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.774 12:52:51 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:31.774 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.774 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.774 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.774 12:52:51 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:31.774 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.774 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.774 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.774 12:52:51 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:31.775 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.775 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.775 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.775 12:52:51 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:31.775 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.775 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.775 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.775 12:52:51 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:31.775 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.775 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.775 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.775 12:52:51 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:31.775 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.775 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.775 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.775 12:52:51 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:31.775 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.775 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.775 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.775 12:52:51 -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:31.775 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.775 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.775 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.775 12:52:51 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:31.775 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.775 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.775 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.775 12:52:51 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:31.775 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.775 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.775 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.775 12:52:51 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:31.775 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.775 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.775 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.775 12:52:51 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:31.775 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.775 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.775 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.775 12:52:51 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:31.775 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.775 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.775 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.775 12:52:51 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:31.775 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.775 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.775 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.775 12:52:51 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:31.775 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.775 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.775 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.775 12:52:51 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:31.775 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.775 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.775 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.775 12:52:51 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:31.775 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.775 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.775 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.775 12:52:51 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:31.775 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.775 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.775 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.775 12:52:51 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:31.775 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.775 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.775 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.775 12:52:51 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:31.775 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.775 12:52:51 -- setup/common.sh@31 -- 
# IFS=': ' 00:30:31.775 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.775 12:52:51 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:31.775 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.775 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.775 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.775 12:52:51 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:31.775 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.775 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.775 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.775 12:52:51 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:31.775 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.775 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.775 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.775 12:52:51 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:31.775 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.775 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.775 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.775 12:52:51 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:31.775 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.775 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.775 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.775 12:52:51 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:31.775 12:52:51 -- setup/common.sh@33 -- # echo 1024 00:30:31.775 12:52:51 -- setup/common.sh@33 -- # return 0 00:30:31.775 12:52:51 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:30:31.775 12:52:51 -- setup/hugepages.sh@112 -- # get_nodes 00:30:31.775 12:52:51 -- setup/hugepages.sh@27 -- # local node 00:30:31.775 12:52:51 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:30:31.775 12:52:51 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:30:31.775 12:52:51 -- setup/hugepages.sh@32 -- # no_nodes=1 00:30:31.775 12:52:51 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:30:31.775 12:52:51 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:30:31.775 12:52:51 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:30:31.775 12:52:51 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:30:31.775 12:52:51 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:30:31.775 12:52:51 -- setup/common.sh@18 -- # local node=0 00:30:31.775 12:52:51 -- setup/common.sh@19 -- # local var val 00:30:31.775 12:52:51 -- setup/common.sh@20 -- # local mem_f mem 00:30:31.775 12:52:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:30:31.775 12:52:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:30:31.775 12:52:51 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:30:31.775 12:52:51 -- setup/common.sh@28 -- # mapfile -t mem 00:30:31.775 12:52:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:30:31.775 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.775 12:52:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6847548 kB' 'MemUsed: 5394432 kB' 'SwapCached: 0 kB' 'Active: 491732 kB' 'Inactive: 2444472 kB' 'Active(anon): 129492 kB' 'Inactive(anon): 0 kB' 'Active(file): 362240 kB' 'Inactive(file): 2444472 kB' 'Unevictable: 1536 kB' 
'Mlocked: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'FilePages: 2817180 kB' 'Mapped: 48692 kB' 'AnonPages: 120604 kB' 'Shmem: 10468 kB' 'KernelStack: 6592 kB' 'PageTables: 4368 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 84780 kB' 'Slab: 164544 kB' 'SReclaimable: 84780 kB' 'SUnreclaim: 79764 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:30:31.775 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.775 12:52:51 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:31.775 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.775 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.775 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.775 12:52:51 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:31.775 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.775 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.775 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.775 12:52:51 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:31.775 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.775 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.775 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.775 12:52:51 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:31.775 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.775 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.775 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.775 12:52:51 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:31.775 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.775 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.775 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.775 12:52:51 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:31.775 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.775 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.775 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.775 12:52:51 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:31.775 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.775 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.775 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.775 12:52:51 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:31.775 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.776 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.776 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.776 12:52:51 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:31.776 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.776 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.776 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.776 12:52:51 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:31.776 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.776 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.776 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.776 12:52:51 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:31.776 12:52:51 -- 
setup/common.sh@32 -- # continue 00:30:31.776 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.776 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.776 12:52:51 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:31.776 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.776 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.776 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.776 12:52:51 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:31.776 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.776 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.776 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.776 12:52:51 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:31.776 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.776 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.776 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.776 12:52:51 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:31.776 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.776 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.776 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.776 12:52:51 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:31.776 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.776 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.776 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.776 12:52:51 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:31.776 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.776 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.776 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.776 12:52:51 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:31.776 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.776 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.776 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.776 12:52:51 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:31.776 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.776 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.776 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.776 12:52:51 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:31.776 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.776 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.776 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.776 12:52:51 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:31.776 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.776 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.776 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.776 12:52:51 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:31.776 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.776 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.776 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.776 12:52:51 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:31.776 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.776 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.776 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.776 12:52:51 -- 
setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:31.776 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.776 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.776 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.776 12:52:51 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:31.776 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.776 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.776 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.776 12:52:51 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:31.776 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.776 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.776 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.776 12:52:51 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:31.776 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.776 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.776 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.776 12:52:51 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:31.776 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.776 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.776 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.776 12:52:51 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:31.776 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.776 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.776 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.776 12:52:51 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:31.776 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.776 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.776 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.776 12:52:51 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:31.776 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.776 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.776 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.776 12:52:51 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:31.776 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.776 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.776 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.776 12:52:51 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:31.776 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.776 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.776 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.776 12:52:51 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:31.776 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.776 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.776 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.776 12:52:51 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:31.776 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.776 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:31.776 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.776 12:52:51 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:31.776 12:52:51 -- setup/common.sh@32 -- # continue 00:30:31.776 12:52:51 -- 
setup/common.sh@31 -- # IFS=': ' 00:30:31.776 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:31.776 12:52:51 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:31.776 12:52:51 -- setup/common.sh@33 -- # echo 0 00:30:31.776 12:52:51 -- setup/common.sh@33 -- # return 0 00:30:31.776 12:52:51 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:30:31.776 12:52:51 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:30:31.776 12:52:51 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:30:31.776 12:52:51 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:30:31.776 node0=1024 expecting 1024 00:30:31.776 12:52:51 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:30:31.776 ************************************ 00:30:31.776 END TEST even_2G_alloc 00:30:31.776 ************************************ 00:30:31.776 12:52:51 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:30:31.776 00:30:31.776 real 0m0.563s 00:30:31.776 user 0m0.282s 00:30:31.776 sys 0m0.277s 00:30:31.776 12:52:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:31.776 12:52:51 -- common/autotest_common.sh@10 -- # set +x 00:30:31.776 12:52:51 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:30:31.776 12:52:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:30:31.776 12:52:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:31.776 12:52:51 -- common/autotest_common.sh@10 -- # set +x 00:30:32.035 ************************************ 00:30:32.035 START TEST odd_alloc 00:30:32.035 ************************************ 00:30:32.035 12:52:51 -- common/autotest_common.sh@1104 -- # odd_alloc 00:30:32.035 12:52:51 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:30:32.035 12:52:51 -- setup/hugepages.sh@49 -- # local size=2098176 00:30:32.035 12:52:51 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:30:32.035 12:52:51 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:30:32.035 12:52:51 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:30:32.035 12:52:51 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:30:32.035 12:52:51 -- setup/hugepages.sh@62 -- # user_nodes=() 00:30:32.035 12:52:51 -- setup/hugepages.sh@62 -- # local user_nodes 00:30:32.035 12:52:51 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:30:32.035 12:52:51 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:30:32.035 12:52:51 -- setup/hugepages.sh@67 -- # nodes_test=() 00:30:32.035 12:52:51 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:30:32.035 12:52:51 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:30:32.035 12:52:51 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:30:32.035 12:52:51 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:30:32.035 12:52:51 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:30:32.035 12:52:51 -- setup/hugepages.sh@83 -- # : 0 00:30:32.035 12:52:51 -- setup/hugepages.sh@84 -- # : 0 00:30:32.035 12:52:51 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:30:32.035 12:52:51 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:30:32.035 12:52:51 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:30:32.035 12:52:51 -- setup/hugepages.sh@160 -- # setup output 00:30:32.035 12:52:51 -- setup/common.sh@9 -- # [[ output == output ]] 00:30:32.035 12:52:51 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:30:32.296 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding 
PCI dev 00:30:32.296 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:30:32.296 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:30:32.296 12:52:51 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:30:32.296 12:52:51 -- setup/hugepages.sh@89 -- # local node 00:30:32.296 12:52:51 -- setup/hugepages.sh@90 -- # local sorted_t 00:30:32.296 12:52:51 -- setup/hugepages.sh@91 -- # local sorted_s 00:30:32.296 12:52:51 -- setup/hugepages.sh@92 -- # local surp 00:30:32.296 12:52:51 -- setup/hugepages.sh@93 -- # local resv 00:30:32.296 12:52:51 -- setup/hugepages.sh@94 -- # local anon 00:30:32.296 12:52:51 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:30:32.296 12:52:51 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:30:32.296 12:52:51 -- setup/common.sh@17 -- # local get=AnonHugePages 00:30:32.296 12:52:51 -- setup/common.sh@18 -- # local node= 00:30:32.296 12:52:51 -- setup/common.sh@19 -- # local var val 00:30:32.296 12:52:51 -- setup/common.sh@20 -- # local mem_f mem 00:30:32.296 12:52:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:30:32.296 12:52:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:30:32.296 12:52:51 -- setup/common.sh@25 -- # [[ -n '' ]] 00:30:32.296 12:52:51 -- setup/common.sh@28 -- # mapfile -t mem 00:30:32.296 12:52:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:30:32.296 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.296 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.297 12:52:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6840064 kB' 'MemAvailable: 9452228 kB' 'Buffers: 2436 kB' 'Cached: 2814744 kB' 'SwapCached: 0 kB' 'Active: 492208 kB' 'Inactive: 2444472 kB' 'Active(anon): 129968 kB' 'Inactive(anon): 0 kB' 'Active(file): 362240 kB' 'Inactive(file): 2444472 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 121084 kB' 'Mapped: 48992 kB' 'Shmem: 10468 kB' 'KReclaimable: 84780 kB' 'Slab: 164632 kB' 'SReclaimable: 84780 kB' 'SUnreclaim: 79852 kB' 'KernelStack: 6648 kB' 'PageTables: 4408 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 351704 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54964 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:30:32.297 12:52:51 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:32.297 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.297 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.297 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.297 12:52:51 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:32.297 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.297 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.297 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.297 12:52:51 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:32.297 12:52:51 
-- setup/common.sh@32 -- # continue 00:30:32.297 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.297 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.297 12:52:51 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:32.297 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.297 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.297 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.297 12:52:51 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:32.297 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.297 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.297 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.297 12:52:51 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:32.297 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.297 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.297 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.297 12:52:51 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:32.297 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.297 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.297 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.297 12:52:51 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:32.297 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.297 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.297 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.297 12:52:51 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:32.297 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.297 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.297 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.297 12:52:51 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:32.297 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.297 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.297 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.297 12:52:51 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:32.297 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.297 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.297 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.297 12:52:51 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:32.297 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.297 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.297 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.297 12:52:51 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:32.297 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.297 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.297 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.297 12:52:51 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:32.297 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.297 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.297 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.297 12:52:51 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:32.297 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.297 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.297 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.297 12:52:51 -- setup/common.sh@32 -- # 
[[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:32.297 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.297 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.297 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.297 12:52:51 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:32.297 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.297 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.297 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.297 12:52:51 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:32.297 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.297 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.297 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.297 12:52:51 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:32.297 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.297 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.297 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.297 12:52:51 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:32.297 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.297 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.297 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.297 12:52:51 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:32.297 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.297 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.297 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.297 12:52:51 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:32.297 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.297 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.297 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.297 12:52:51 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:32.297 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.297 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.297 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.297 12:52:51 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:32.297 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.297 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.297 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.297 12:52:51 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:32.297 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.297 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.297 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.297 12:52:51 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:32.297 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.297 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.297 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.297 12:52:51 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:32.297 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.297 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.297 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.297 12:52:51 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:32.297 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.297 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.297 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 
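The get_meminfo pass traced above walks every field of /proc/meminfo (or, when a node is given, /sys/devices/system/node/nodeN/meminfo) with IFS=': ' and read -r, hitting "continue" on every key that is not the one requested, which is why each meminfo field shows up exactly once per lookup in this trace. A minimal sketch of that parsing pattern, written against the same file paths and the same "Node N " prefix-stripping expansion that appear in the trace; the function body is a simplified stand-in, not the actual setup/common.sh implementation:

shopt -s extglob   # needed for the +([0-9]) pattern used when stripping the node prefix

# sketch: print one field from /proc/meminfo, or from a node's meminfo if a node id is given
get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    local -a mem
    local var val _
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # per-node files prefix every line with "Node N "; strip it, as the traced helper does
    mem=("${mem[@]#Node +([0-9]) }")
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

With this sketch, get_meminfo AnonHugePages would return 0 kB on the VM above, and get_meminfo HugePages_Surp 0 would read node 0's surplus count from the per-node file, mirroring the lookups this trace performs.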
00:30:32.297 12:52:51 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:32.297 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.297 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.297 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.297 12:52:51 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:32.297 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.297 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.297 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.297 12:52:51 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:32.297 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.297 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.297 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.297 12:52:51 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:32.297 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.297 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.297 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.297 12:52:51 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:32.297 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.297 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.297 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.297 12:52:51 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:32.297 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.297 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.297 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.297 12:52:51 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:32.297 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.297 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.297 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.297 12:52:51 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:32.297 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.297 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.297 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.297 12:52:51 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:32.297 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.297 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.297 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.297 12:52:51 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:32.297 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.297 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.297 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.298 12:52:51 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:32.298 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.298 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.298 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.298 12:52:51 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:32.298 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.298 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.298 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.298 12:52:51 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:32.298 12:52:51 -- setup/common.sh@33 -- # echo 0 00:30:32.298 12:52:51 -- 
setup/common.sh@33 -- # return 0 00:30:32.298 12:52:51 -- setup/hugepages.sh@97 -- # anon=0 00:30:32.298 12:52:51 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:30:32.298 12:52:51 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:30:32.298 12:52:51 -- setup/common.sh@18 -- # local node= 00:30:32.298 12:52:51 -- setup/common.sh@19 -- # local var val 00:30:32.298 12:52:51 -- setup/common.sh@20 -- # local mem_f mem 00:30:32.298 12:52:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:30:32.298 12:52:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:30:32.298 12:52:51 -- setup/common.sh@25 -- # [[ -n '' ]] 00:30:32.298 12:52:51 -- setup/common.sh@28 -- # mapfile -t mem 00:30:32.298 12:52:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:30:32.298 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.298 12:52:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6840064 kB' 'MemAvailable: 9452228 kB' 'Buffers: 2436 kB' 'Cached: 2814744 kB' 'SwapCached: 0 kB' 'Active: 491928 kB' 'Inactive: 2444472 kB' 'Active(anon): 129688 kB' 'Inactive(anon): 0 kB' 'Active(file): 362240 kB' 'Inactive(file): 2444472 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 120848 kB' 'Mapped: 48868 kB' 'Shmem: 10468 kB' 'KReclaimable: 84780 kB' 'Slab: 164636 kB' 'SReclaimable: 84780 kB' 'SUnreclaim: 79856 kB' 'KernelStack: 6656 kB' 'PageTables: 4544 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 351704 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54948 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:30:32.298 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.298 12:52:51 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:32.298 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.298 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.298 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.298 12:52:51 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:32.298 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.298 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.298 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.298 12:52:51 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:32.298 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.298 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.298 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.298 12:52:51 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:32.298 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.298 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.298 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.298 12:52:51 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:32.298 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.298 12:52:51 -- 
setup/common.sh@31 -- # IFS=': ' 00:30:32.298 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.298 12:52:51 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:32.298 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.298 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.298 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.298 12:52:51 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:32.298 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.298 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.298 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.298 12:52:51 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:32.298 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.298 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.298 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.298 12:52:51 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:32.298 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.298 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.298 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.298 12:52:51 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:32.298 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.298 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.298 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.298 12:52:51 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:32.298 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.298 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.298 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.298 12:52:51 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:32.298 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.298 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.298 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.298 12:52:51 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:32.298 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.298 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.298 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.298 12:52:51 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:32.298 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.298 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.298 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.298 12:52:51 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:32.298 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.298 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.298 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.298 12:52:51 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:32.298 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.298 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.298 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.298 12:52:51 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:32.298 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.298 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.298 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.298 12:52:51 -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:32.298 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.298 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.298 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.298 12:52:51 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:32.298 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.298 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.298 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.298 12:52:51 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:32.298 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.298 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.298 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.298 12:52:51 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:32.298 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.298 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.298 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.298 12:52:51 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:32.298 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.298 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.298 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.298 12:52:51 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:32.298 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.298 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.298 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.298 12:52:51 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:32.298 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.298 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.298 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.298 12:52:51 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:32.298 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.298 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.298 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.298 12:52:51 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:32.298 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.298 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.298 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.298 12:52:51 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:32.298 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.298 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.298 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.298 12:52:51 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:32.298 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.298 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.298 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.298 12:52:51 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:32.298 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.299 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.299 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.299 12:52:51 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:32.299 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.299 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.299 12:52:51 -- setup/common.sh@31 -- # 
read -r var val _ 00:30:32.299 12:52:51 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:32.299 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.299 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.299 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.299 12:52:51 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:32.299 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.299 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.299 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.299 12:52:51 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:32.299 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.299 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.299 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.299 12:52:51 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:32.299 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.299 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.299 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.299 12:52:51 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:32.299 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.299 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.299 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.299 12:52:51 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:32.299 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.299 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.299 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.299 12:52:51 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:32.299 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.299 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.299 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.299 12:52:51 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:32.299 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.299 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.299 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.299 12:52:51 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:32.299 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.299 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.299 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.299 12:52:51 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:32.299 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.299 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.299 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.299 12:52:51 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:32.299 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.299 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.299 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.299 12:52:51 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:32.299 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.299 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.299 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.299 12:52:51 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:32.299 12:52:51 -- setup/common.sh@32 
-- # continue 00:30:32.299 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.299 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.299 12:52:51 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:32.299 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.299 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.299 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.299 12:52:51 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:32.299 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.299 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.299 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.299 12:52:51 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:32.299 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.299 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.299 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.299 12:52:51 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:32.299 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.299 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.299 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.299 12:52:51 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:32.299 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.299 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.299 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.299 12:52:51 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:32.299 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.299 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.299 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.299 12:52:51 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:32.299 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.299 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.299 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.299 12:52:51 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:32.299 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.299 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.299 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.299 12:52:51 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:32.299 12:52:51 -- setup/common.sh@33 -- # echo 0 00:30:32.299 12:52:51 -- setup/common.sh@33 -- # return 0 00:30:32.299 12:52:51 -- setup/hugepages.sh@99 -- # surp=0 00:30:32.299 12:52:51 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:30:32.299 12:52:51 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:30:32.299 12:52:51 -- setup/common.sh@18 -- # local node= 00:30:32.299 12:52:51 -- setup/common.sh@19 -- # local var val 00:30:32.299 12:52:51 -- setup/common.sh@20 -- # local mem_f mem 00:30:32.299 12:52:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:30:32.299 12:52:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:30:32.299 12:52:51 -- setup/common.sh@25 -- # [[ -n '' ]] 00:30:32.299 12:52:51 -- setup/common.sh@28 -- # mapfile -t mem 00:30:32.299 12:52:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:30:32.299 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.299 12:52:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6840064 
kB' 'MemAvailable: 9452228 kB' 'Buffers: 2436 kB' 'Cached: 2814744 kB' 'SwapCached: 0 kB' 'Active: 491948 kB' 'Inactive: 2444472 kB' 'Active(anon): 129708 kB' 'Inactive(anon): 0 kB' 'Active(file): 362240 kB' 'Inactive(file): 2444472 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 120848 kB' 'Mapped: 48868 kB' 'Shmem: 10468 kB' 'KReclaimable: 84780 kB' 'Slab: 164636 kB' 'SReclaimable: 84780 kB' 'SUnreclaim: 79856 kB' 'KernelStack: 6656 kB' 'PageTables: 4544 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 351704 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54964 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:30:32.299 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.299 12:52:51 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:32.299 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.299 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.299 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.299 12:52:51 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:32.299 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.299 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.299 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.299 12:52:51 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:32.299 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.299 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.299 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.299 12:52:51 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:32.299 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.299 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.299 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.299 12:52:51 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:32.299 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.299 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.299 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.299 12:52:51 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:32.299 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.299 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.299 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.299 12:52:51 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:32.299 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.299 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.299 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.299 12:52:51 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:32.299 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.299 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.299 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.300 12:52:51 -- setup/common.sh@32 
-- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:32.300 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.300 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.300 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.300 12:52:51 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:32.300 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.300 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.300 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.300 12:52:51 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:32.300 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.300 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.300 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.300 12:52:51 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:32.300 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.300 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.300 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.300 12:52:51 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:32.300 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.300 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.300 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.300 12:52:51 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:32.300 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.300 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.300 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.300 12:52:51 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:32.300 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.300 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.300 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.300 12:52:51 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:32.300 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.300 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.300 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.300 12:52:51 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:32.300 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.300 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.300 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.300 12:52:51 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:32.300 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.300 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.300 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.300 12:52:51 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:32.300 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.300 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.300 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.300 12:52:51 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:32.300 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.300 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.300 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.300 12:52:51 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:32.300 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.300 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.300 
12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.300 12:52:51 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:32.300 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.300 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.300 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.300 12:52:51 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:32.300 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.300 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.300 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.300 12:52:51 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:32.300 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.300 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.300 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.300 12:52:51 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:32.300 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.300 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.300 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.300 12:52:51 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:32.300 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.300 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.300 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.300 12:52:51 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:32.300 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.300 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.300 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.300 12:52:51 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:32.300 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.300 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.300 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.300 12:52:51 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:32.300 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.300 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.300 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.300 12:52:51 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:32.300 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.300 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.300 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.300 12:52:51 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:32.300 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.300 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.300 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.300 12:52:51 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:32.300 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.300 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.300 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.300 12:52:51 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:32.300 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.300 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.300 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.300 12:52:51 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:32.300 12:52:51 -- 
setup/common.sh@32 -- # continue 00:30:32.300 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.300 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.300 12:52:51 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:32.300 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.300 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.300 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.300 12:52:51 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:32.300 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.300 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.300 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.300 12:52:51 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:32.300 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.300 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.300 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.300 12:52:51 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:32.300 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.300 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.300 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.300 12:52:51 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:32.300 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.300 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.300 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.300 12:52:51 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:32.300 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.300 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.300 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.300 12:52:51 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:32.300 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.300 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.300 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.300 12:52:51 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:32.300 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.300 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.300 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.300 12:52:51 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:32.300 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.300 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.300 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.300 12:52:51 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:32.300 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.300 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.300 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.300 12:52:51 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:32.300 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.300 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.300 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.300 12:52:51 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:32.300 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.300 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.300 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 
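This HugePages_Rsvd pass is the last of the global counters the verification step collects after AnonHugePages and HugePages_Surp; the trace that follows checks that the HugePages_Total of 1025 (the odd page count produced by HUGEMEM=2049 at the start of this test) equals nr_hugepages plus the surplus and reserved counts, and then repeats the surplus lookup per NUMA node from /sys/devices/system/node/node0/meminfo. A rough, hypothetical condensation of that accounting, assuming the simplified get_meminfo sketch above (the real script is verify_nr_hugepages in setup/hugepages.sh and tracks per-node arrays as well):

# hypothetical summary of the verify step; relies on get_meminfo and extglob from the sketch above
verify_total_hugepages() {
    local requested=$1                        # 1025 pages in this odd_alloc run
    local anon surp resv total node
    anon=$(get_meminfo AnonHugePages)         # transparent hugepage usage, expected 0 kB here
    surp=$(get_meminfo HugePages_Surp)        # surplus pages beyond the static pool
    resv=$(get_meminfo HugePages_Rsvd)        # pages reserved but not yet faulted in
    total=$(get_meminfo HugePages_Total)
    # same shape as the hugepages.sh@107 assertion in the trace below:
    # the kernel's total must account for the requested pool plus surplus and reserved pages
    (( total == requested + surp + resv )) || return 1
    # per-node follow-up, matching the node0 lookup that closes this section of the trace
    for node in /sys/devices/system/node/node+([0-9]); do
        echo "node${node##*node} surplus: $(get_meminfo HugePages_Surp "${node##*node}")"
    done
}

On the single-node VM in this run, verify_total_hugepages 1025 would pass, since the trace reports HugePages_Total: 1025 with HugePages_Surp: 0 and HugePages_Rsvd: 0.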
00:30:32.300 12:52:51 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:32.300 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.300 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.300 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.300 12:52:51 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:32.300 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.301 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.301 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.301 12:52:51 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:32.301 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.301 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.301 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.301 12:52:51 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:32.301 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.301 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.301 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.301 12:52:51 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:32.301 12:52:51 -- setup/common.sh@33 -- # echo 0 00:30:32.301 12:52:51 -- setup/common.sh@33 -- # return 0 00:30:32.301 nr_hugepages=1025 00:30:32.301 12:52:51 -- setup/hugepages.sh@100 -- # resv=0 00:30:32.301 12:52:51 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:30:32.301 resv_hugepages=0 00:30:32.301 12:52:51 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:30:32.301 surplus_hugepages=0 00:30:32.301 anon_hugepages=0 00:30:32.301 12:52:51 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:30:32.301 12:52:51 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:30:32.301 12:52:51 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:30:32.301 12:52:51 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:30:32.301 12:52:51 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:30:32.301 12:52:51 -- setup/common.sh@17 -- # local get=HugePages_Total 00:30:32.301 12:52:51 -- setup/common.sh@18 -- # local node= 00:30:32.301 12:52:51 -- setup/common.sh@19 -- # local var val 00:30:32.301 12:52:51 -- setup/common.sh@20 -- # local mem_f mem 00:30:32.301 12:52:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:30:32.301 12:52:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:30:32.301 12:52:51 -- setup/common.sh@25 -- # [[ -n '' ]] 00:30:32.301 12:52:51 -- setup/common.sh@28 -- # mapfile -t mem 00:30:32.301 12:52:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:30:32.301 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.301 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.301 12:52:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6840064 kB' 'MemAvailable: 9452228 kB' 'Buffers: 2436 kB' 'Cached: 2814744 kB' 'SwapCached: 0 kB' 'Active: 491912 kB' 'Inactive: 2444472 kB' 'Active(anon): 129672 kB' 'Inactive(anon): 0 kB' 'Active(file): 362240 kB' 'Inactive(file): 2444472 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 120824 kB' 'Mapped: 48868 kB' 'Shmem: 10468 kB' 'KReclaimable: 84780 kB' 'Slab: 164636 kB' 'SReclaimable: 84780 kB' 'SUnreclaim: 79856 kB' 'KernelStack: 6672 kB' 'PageTables: 4600 kB' 'SecPageTables: 
0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 351704 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54964 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:30:32.301 12:52:51 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:32.301 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.301 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.301 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.301 12:52:51 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:32.301 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.301 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.301 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.301 12:52:51 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:32.301 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.301 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.301 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.301 12:52:51 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:32.301 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.301 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.301 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.301 12:52:51 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:32.301 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.301 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.301 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.301 12:52:51 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:32.301 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.301 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.301 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.301 12:52:51 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:32.301 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.301 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.301 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.301 12:52:51 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:32.301 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.301 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.301 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.301 12:52:51 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:32.301 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.301 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.301 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.301 12:52:51 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:32.301 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.301 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.301 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.301 12:52:51 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:32.301 12:52:51 
-- setup/common.sh@32 -- # continue 00:30:32.301 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.301 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.301 12:52:51 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:32.301 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.301 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.301 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.301 12:52:51 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:32.301 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.301 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.301 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.301 12:52:51 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:32.301 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.301 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.301 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.301 12:52:51 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:32.301 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.301 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.301 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.301 12:52:51 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:32.301 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.301 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.301 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.301 12:52:51 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:32.301 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.301 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.301 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.301 12:52:51 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:32.301 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.301 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.301 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.301 12:52:51 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:32.301 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.302 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.302 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.302 12:52:51 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:32.302 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.302 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.302 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.302 12:52:51 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:32.302 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.302 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.302 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.302 12:52:51 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:32.302 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.302 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.302 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.302 12:52:51 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:32.302 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.302 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.302 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.302 12:52:51 
-- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:32.302 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.302 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.302 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.302 12:52:51 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:32.302 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.302 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.302 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.302 12:52:51 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:32.302 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.302 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.302 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.302 12:52:51 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:32.302 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.302 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.302 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.302 12:52:51 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:32.302 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.302 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.302 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.302 12:52:51 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:32.302 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.302 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.302 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.302 12:52:51 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:32.302 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.302 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.302 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.302 12:52:51 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:32.302 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.302 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.302 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.302 12:52:51 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:32.302 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.302 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.302 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.302 12:52:51 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:32.302 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.302 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.302 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.302 12:52:51 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:32.302 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.302 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.302 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.302 12:52:51 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:32.302 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.302 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.302 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.302 12:52:51 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:32.302 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.302 
12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.302 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.302 12:52:51 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:32.302 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.302 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.302 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.302 12:52:51 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:32.302 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.302 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.302 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.302 12:52:51 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:32.302 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.302 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.302 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.302 12:52:51 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:32.302 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.302 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.302 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.302 12:52:51 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:32.302 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.302 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.302 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.302 12:52:51 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:32.302 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.302 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.302 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.302 12:52:51 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:32.302 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.302 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.302 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.302 12:52:51 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:32.302 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.302 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.302 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.302 12:52:51 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:32.302 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.302 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.302 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.302 12:52:51 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:32.302 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.302 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.302 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.302 12:52:51 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:32.302 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.302 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.302 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.302 12:52:51 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:32.302 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.302 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.302 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.302 12:52:51 -- 
setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:32.302 12:52:51 -- setup/common.sh@33 -- # echo 1025 00:30:32.302 12:52:51 -- setup/common.sh@33 -- # return 0 00:30:32.302 12:52:51 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:30:32.302 12:52:51 -- setup/hugepages.sh@112 -- # get_nodes 00:30:32.302 12:52:51 -- setup/hugepages.sh@27 -- # local node 00:30:32.302 12:52:51 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:30:32.302 12:52:51 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:30:32.302 12:52:51 -- setup/hugepages.sh@32 -- # no_nodes=1 00:30:32.302 12:52:51 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:30:32.302 12:52:51 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:30:32.302 12:52:51 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:30:32.562 12:52:51 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:30:32.562 12:52:51 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:30:32.562 12:52:51 -- setup/common.sh@18 -- # local node=0 00:30:32.562 12:52:51 -- setup/common.sh@19 -- # local var val 00:30:32.562 12:52:51 -- setup/common.sh@20 -- # local mem_f mem 00:30:32.562 12:52:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:30:32.562 12:52:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:30:32.562 12:52:51 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:30:32.562 12:52:51 -- setup/common.sh@28 -- # mapfile -t mem 00:30:32.562 12:52:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:30:32.562 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.562 12:52:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6840064 kB' 'MemUsed: 5401916 kB' 'SwapCached: 0 kB' 'Active: 491668 kB' 'Inactive: 2444472 kB' 'Active(anon): 129428 kB' 'Inactive(anon): 0 kB' 'Active(file): 362240 kB' 'Inactive(file): 2444472 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'FilePages: 2817180 kB' 'Mapped: 48868 kB' 'AnonPages: 120544 kB' 'Shmem: 10468 kB' 'KernelStack: 6640 kB' 'PageTables: 4488 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 84780 kB' 'Slab: 164628 kB' 'SReclaimable: 84780 kB' 'SUnreclaim: 79848 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:30:32.562 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.563 12:52:51 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:32.563 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.563 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.563 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.563 12:52:51 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:32.563 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.563 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.563 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.563 12:52:51 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:32.563 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.563 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.563 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.563 12:52:51 -- setup/common.sh@32 -- # [[ SwapCached == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:32.563 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.563 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.563 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.563 12:52:51 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:32.563 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.563 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.563 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.563 12:52:51 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:32.563 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.563 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.563 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.563 12:52:51 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:32.563 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.563 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.563 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.563 12:52:51 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:32.563 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.563 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.563 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.563 12:52:51 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:32.563 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.563 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.563 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.563 12:52:51 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:32.563 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.563 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.563 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.563 12:52:51 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:32.563 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.563 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.563 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.563 12:52:51 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:32.563 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.563 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.563 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.563 12:52:51 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:32.563 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.563 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.563 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.563 12:52:51 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:32.563 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.563 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.563 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.563 12:52:51 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:32.563 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.563 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.563 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.563 12:52:51 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:32.563 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.563 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.563 12:52:51 -- 
setup/common.sh@31 -- # read -r var val _ 00:30:32.563 12:52:51 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:32.563 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.563 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.563 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.563 12:52:51 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:32.563 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.563 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.563 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.563 12:52:51 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:32.563 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.563 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.563 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.563 12:52:51 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:32.563 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.563 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.563 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.563 12:52:51 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:32.563 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.563 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.563 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.563 12:52:51 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:32.563 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.563 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.563 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.563 12:52:51 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:32.563 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.563 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.563 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.563 12:52:51 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:32.563 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.563 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.563 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.563 12:52:51 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:32.563 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.563 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.563 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.563 12:52:51 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:32.563 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.563 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.563 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.563 12:52:51 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:32.563 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.563 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.563 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.563 12:52:51 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:32.563 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.563 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.563 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.563 12:52:51 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:32.563 12:52:51 -- setup/common.sh@32 
-- # continue 00:30:32.563 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.563 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.563 12:52:51 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:32.563 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.563 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.563 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.563 12:52:51 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:32.563 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.563 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.563 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.563 12:52:51 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:32.563 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.563 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.563 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.563 12:52:51 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:32.563 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.563 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.563 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.563 12:52:51 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:32.563 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.563 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.563 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.563 12:52:51 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:32.563 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.563 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.563 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.563 12:52:51 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:32.563 12:52:51 -- setup/common.sh@32 -- # continue 00:30:32.563 12:52:51 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.563 12:52:51 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.563 12:52:51 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:32.563 12:52:51 -- setup/common.sh@33 -- # echo 0 00:30:32.563 12:52:51 -- setup/common.sh@33 -- # return 0 00:30:32.563 12:52:51 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:30:32.563 12:52:51 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:30:32.563 12:52:51 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:30:32.563 12:52:51 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:30:32.563 12:52:51 -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:30:32.563 node0=1025 expecting 1025 00:30:32.563 12:52:51 -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:30:32.563 00:30:32.563 real 0m0.561s 00:30:32.563 user 0m0.269s 00:30:32.563 sys 0m0.291s 00:30:32.563 12:52:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:32.563 12:52:51 -- common/autotest_common.sh@10 -- # set +x 00:30:32.563 ************************************ 00:30:32.563 END TEST odd_alloc 00:30:32.563 ************************************ 00:30:32.563 12:52:51 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:30:32.563 12:52:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:30:32.563 12:52:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:32.563 12:52:51 -- common/autotest_common.sh@10 -- # set +x 
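[editor's note] The trace above is the per-field scan done by get_meminfo in setup/common.sh: each meminfo key is compared against the requested field and skipped with `continue` until the match (HugePages_Total, then HugePages_Surp for node 0) is found and echoed, which is what feeds the odd_alloc check that ends with "node0=1025 expecting 1025". A minimal stand-alone sketch of that style of lookup, assuming plain /proc/meminfo or a per-node meminfo file; the helper name and structure here are illustrative, not the exact setup/common.sh implementation:

get_meminfo_field() {
    # Return the numeric value of one meminfo field, e.g. HugePages_Total.
    local get=$1 node=${2-} mem_f=/proc/meminfo
    # Per-node lookups read the node-local meminfo instead, when it exists.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local var val _
    # Node-local files prefix every line with "Node N ", so strip that first,
    # then split each "Key:   value kB" line on ':' / space like the trace does.
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < <(sed 's/^Node [0-9]* //' "$mem_f")
    return 1
}
# Example on this run: get_meminfo_field HugePages_Total 0  ->  1025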
00:30:32.563 ************************************ 00:30:32.563 START TEST custom_alloc 00:30:32.563 ************************************ 00:30:32.563 12:52:51 -- common/autotest_common.sh@1104 -- # custom_alloc 00:30:32.563 12:52:51 -- setup/hugepages.sh@167 -- # local IFS=, 00:30:32.564 12:52:51 -- setup/hugepages.sh@169 -- # local node 00:30:32.564 12:52:51 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:30:32.564 12:52:51 -- setup/hugepages.sh@170 -- # local nodes_hp 00:30:32.564 12:52:51 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:30:32.564 12:52:51 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:30:32.564 12:52:51 -- setup/hugepages.sh@49 -- # local size=1048576 00:30:32.564 12:52:51 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:30:32.564 12:52:51 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:30:32.564 12:52:51 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:30:32.564 12:52:51 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:30:32.564 12:52:51 -- setup/hugepages.sh@62 -- # user_nodes=() 00:30:32.564 12:52:51 -- setup/hugepages.sh@62 -- # local user_nodes 00:30:32.564 12:52:51 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:30:32.564 12:52:51 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:30:32.564 12:52:51 -- setup/hugepages.sh@67 -- # nodes_test=() 00:30:32.564 12:52:51 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:30:32.564 12:52:51 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:30:32.564 12:52:51 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:30:32.564 12:52:51 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:30:32.564 12:52:51 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:30:32.564 12:52:51 -- setup/hugepages.sh@83 -- # : 0 00:30:32.564 12:52:51 -- setup/hugepages.sh@84 -- # : 0 00:30:32.564 12:52:51 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:30:32.564 12:52:51 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:30:32.564 12:52:51 -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:30:32.564 12:52:51 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:30:32.564 12:52:51 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:30:32.564 12:52:51 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:30:32.564 12:52:51 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:30:32.564 12:52:51 -- setup/hugepages.sh@62 -- # user_nodes=() 00:30:32.564 12:52:51 -- setup/hugepages.sh@62 -- # local user_nodes 00:30:32.564 12:52:51 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:30:32.564 12:52:51 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:30:32.564 12:52:51 -- setup/hugepages.sh@67 -- # nodes_test=() 00:30:32.564 12:52:51 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:30:32.564 12:52:51 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:30:32.564 12:52:51 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:30:32.564 12:52:51 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:30:32.564 12:52:51 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:30:32.564 12:52:51 -- setup/hugepages.sh@78 -- # return 0 00:30:32.564 12:52:51 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:30:32.564 12:52:51 -- setup/hugepages.sh@187 -- # setup output 00:30:32.564 12:52:51 -- setup/common.sh@9 -- # [[ output == output ]] 00:30:32.564 12:52:51 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:30:32.824 0000:00:03.0 (1af4 1001): Active devices: 
mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:32.824 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:30:32.824 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:30:32.824 12:52:52 -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:30:32.824 12:52:52 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:30:32.824 12:52:52 -- setup/hugepages.sh@89 -- # local node 00:30:32.824 12:52:52 -- setup/hugepages.sh@90 -- # local sorted_t 00:30:32.824 12:52:52 -- setup/hugepages.sh@91 -- # local sorted_s 00:30:32.824 12:52:52 -- setup/hugepages.sh@92 -- # local surp 00:30:32.824 12:52:52 -- setup/hugepages.sh@93 -- # local resv 00:30:32.824 12:52:52 -- setup/hugepages.sh@94 -- # local anon 00:30:32.824 12:52:52 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:30:32.824 12:52:52 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:30:32.824 12:52:52 -- setup/common.sh@17 -- # local get=AnonHugePages 00:30:32.824 12:52:52 -- setup/common.sh@18 -- # local node= 00:30:32.824 12:52:52 -- setup/common.sh@19 -- # local var val 00:30:32.824 12:52:52 -- setup/common.sh@20 -- # local mem_f mem 00:30:32.824 12:52:52 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:30:32.824 12:52:52 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:30:32.824 12:52:52 -- setup/common.sh@25 -- # [[ -n '' ]] 00:30:32.824 12:52:52 -- setup/common.sh@28 -- # mapfile -t mem 00:30:32.824 12:52:52 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:30:32.824 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.824 12:52:52 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7890764 kB' 'MemAvailable: 10502932 kB' 'Buffers: 2436 kB' 'Cached: 2814748 kB' 'SwapCached: 0 kB' 'Active: 492084 kB' 'Inactive: 2444476 kB' 'Active(anon): 129844 kB' 'Inactive(anon): 0 kB' 'Active(file): 362240 kB' 'Inactive(file): 2444476 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 120932 kB' 'Mapped: 49056 kB' 'Shmem: 10468 kB' 'KReclaimable: 84780 kB' 'Slab: 164588 kB' 'SReclaimable: 84780 kB' 'SUnreclaim: 79808 kB' 'KernelStack: 6660 kB' 'PageTables: 4288 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 351704 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54964 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:30:32.824 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.824 12:52:52 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:32.824 12:52:52 -- setup/common.sh@32 -- # continue 00:30:32.824 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.824 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.824 12:52:52 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:32.824 12:52:52 -- setup/common.sh@32 -- # continue 00:30:32.824 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.824 12:52:52 -- setup/common.sh@31 -- # read -r 
var val _ 00:30:32.824 12:52:52 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:32.824 12:52:52 -- setup/common.sh@32 -- # continue 00:30:32.824 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.824 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.824 12:52:52 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:32.824 12:52:52 -- setup/common.sh@32 -- # continue 00:30:32.824 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.824 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.824 12:52:52 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:32.824 12:52:52 -- setup/common.sh@32 -- # continue 00:30:32.824 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.824 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.824 12:52:52 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:32.824 12:52:52 -- setup/common.sh@32 -- # continue 00:30:32.824 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.824 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.824 12:52:52 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:32.824 12:52:52 -- setup/common.sh@32 -- # continue 00:30:32.824 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.824 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.824 12:52:52 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:32.824 12:52:52 -- setup/common.sh@32 -- # continue 00:30:32.824 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.824 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.824 12:52:52 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:32.824 12:52:52 -- setup/common.sh@32 -- # continue 00:30:32.824 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.824 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.824 12:52:52 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:32.824 12:52:52 -- setup/common.sh@32 -- # continue 00:30:32.824 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.824 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.824 12:52:52 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:32.824 12:52:52 -- setup/common.sh@32 -- # continue 00:30:32.824 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.824 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.824 12:52:52 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:32.824 12:52:52 -- setup/common.sh@32 -- # continue 00:30:32.824 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.824 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.824 12:52:52 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:32.824 12:52:52 -- setup/common.sh@32 -- # continue 00:30:32.824 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.824 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.824 12:52:52 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:32.824 12:52:52 -- setup/common.sh@32 -- # continue 00:30:32.824 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.824 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.824 12:52:52 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:32.824 12:52:52 -- setup/common.sh@32 -- # continue 00:30:32.824 12:52:52 -- setup/common.sh@31 
-- # IFS=': ' 00:30:32.824 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.824 12:52:52 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:32.824 12:52:52 -- setup/common.sh@32 -- # continue 00:30:32.824 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.824 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.824 12:52:52 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:32.824 12:52:52 -- setup/common.sh@32 -- # continue 00:30:32.824 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.824 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.824 12:52:52 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:32.824 12:52:52 -- setup/common.sh@32 -- # continue 00:30:32.824 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.824 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.824 12:52:52 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:32.824 12:52:52 -- setup/common.sh@32 -- # continue 00:30:32.824 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.824 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.824 12:52:52 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:32.824 12:52:52 -- setup/common.sh@32 -- # continue 00:30:32.824 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.824 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.824 12:52:52 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:32.824 12:52:52 -- setup/common.sh@32 -- # continue 00:30:32.824 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.824 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.824 12:52:52 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:32.825 12:52:52 -- setup/common.sh@32 -- # continue 00:30:32.825 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.825 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.825 12:52:52 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:32.825 12:52:52 -- setup/common.sh@32 -- # continue 00:30:32.825 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.825 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.825 12:52:52 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:32.825 12:52:52 -- setup/common.sh@32 -- # continue 00:30:32.825 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.825 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.825 12:52:52 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:32.825 12:52:52 -- setup/common.sh@32 -- # continue 00:30:32.825 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.825 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.825 12:52:52 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:32.825 12:52:52 -- setup/common.sh@32 -- # continue 00:30:32.825 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.825 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.825 12:52:52 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:32.825 12:52:52 -- setup/common.sh@32 -- # continue 00:30:32.825 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.825 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.825 12:52:52 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:32.825 12:52:52 -- setup/common.sh@32 -- # 
continue 00:30:32.825 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.825 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.825 12:52:52 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:32.825 12:52:52 -- setup/common.sh@32 -- # continue 00:30:32.825 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.825 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.825 12:52:52 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:32.825 12:52:52 -- setup/common.sh@32 -- # continue 00:30:32.825 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.825 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.825 12:52:52 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:32.825 12:52:52 -- setup/common.sh@32 -- # continue 00:30:32.825 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.825 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.825 12:52:52 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:32.825 12:52:52 -- setup/common.sh@32 -- # continue 00:30:32.825 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.825 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.825 12:52:52 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:32.825 12:52:52 -- setup/common.sh@32 -- # continue 00:30:32.825 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.825 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.825 12:52:52 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:32.825 12:52:52 -- setup/common.sh@32 -- # continue 00:30:32.825 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.825 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.825 12:52:52 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:32.825 12:52:52 -- setup/common.sh@32 -- # continue 00:30:32.825 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.825 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.825 12:52:52 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:32.825 12:52:52 -- setup/common.sh@32 -- # continue 00:30:32.825 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.825 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.825 12:52:52 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:32.825 12:52:52 -- setup/common.sh@32 -- # continue 00:30:32.825 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.825 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.825 12:52:52 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:32.825 12:52:52 -- setup/common.sh@32 -- # continue 00:30:32.825 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.825 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.825 12:52:52 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:32.825 12:52:52 -- setup/common.sh@32 -- # continue 00:30:32.825 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.825 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.825 12:52:52 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:32.825 12:52:52 -- setup/common.sh@32 -- # continue 00:30:32.825 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.825 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.825 12:52:52 -- setup/common.sh@32 -- # [[ 
AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:32.825 12:52:52 -- setup/common.sh@33 -- # echo 0 00:30:32.825 12:52:52 -- setup/common.sh@33 -- # return 0 00:30:32.825 12:52:52 -- setup/hugepages.sh@97 -- # anon=0 00:30:32.825 12:52:52 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:30:32.825 12:52:52 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:30:32.825 12:52:52 -- setup/common.sh@18 -- # local node= 00:30:32.825 12:52:52 -- setup/common.sh@19 -- # local var val 00:30:32.825 12:52:52 -- setup/common.sh@20 -- # local mem_f mem 00:30:32.825 12:52:52 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:30:32.825 12:52:52 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:30:32.825 12:52:52 -- setup/common.sh@25 -- # [[ -n '' ]] 00:30:32.825 12:52:52 -- setup/common.sh@28 -- # mapfile -t mem 00:30:32.825 12:52:52 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:30:32.825 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.825 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.825 12:52:52 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7890764 kB' 'MemAvailable: 10502932 kB' 'Buffers: 2436 kB' 'Cached: 2814748 kB' 'SwapCached: 0 kB' 'Active: 491624 kB' 'Inactive: 2444476 kB' 'Active(anon): 129384 kB' 'Inactive(anon): 0 kB' 'Active(file): 362240 kB' 'Inactive(file): 2444476 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 120424 kB' 'Mapped: 48928 kB' 'Shmem: 10468 kB' 'KReclaimable: 84780 kB' 'Slab: 164572 kB' 'SReclaimable: 84780 kB' 'SUnreclaim: 79792 kB' 'KernelStack: 6604 kB' 'PageTables: 4252 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 351704 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54932 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:30:32.825 12:52:52 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:32.825 12:52:52 -- setup/common.sh@32 -- # continue 00:30:32.825 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.825 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.825 12:52:52 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:32.825 12:52:52 -- setup/common.sh@32 -- # continue 00:30:32.825 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.825 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.825 12:52:52 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:32.825 12:52:52 -- setup/common.sh@32 -- # continue 00:30:32.825 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.825 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.825 12:52:52 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:32.825 12:52:52 -- setup/common.sh@32 -- # continue 00:30:32.825 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.825 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.825 12:52:52 -- setup/common.sh@32 -- # [[ Cached == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:32.825 12:52:52 -- setup/common.sh@32 -- # continue 00:30:32.825 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.825 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.825 12:52:52 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:32.825 12:52:52 -- setup/common.sh@32 -- # continue 00:30:32.825 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.825 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.825 12:52:52 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:32.825 12:52:52 -- setup/common.sh@32 -- # continue 00:30:32.825 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.825 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.825 12:52:52 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:32.825 12:52:52 -- setup/common.sh@32 -- # continue 00:30:32.825 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.825 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.825 12:52:52 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:32.825 12:52:52 -- setup/common.sh@32 -- # continue 00:30:32.825 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.825 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.825 12:52:52 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:32.825 12:52:52 -- setup/common.sh@32 -- # continue 00:30:32.825 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.825 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.826 12:52:52 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:32.826 12:52:52 -- setup/common.sh@32 -- # continue 00:30:32.826 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.826 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.826 12:52:52 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:32.826 12:52:52 -- setup/common.sh@32 -- # continue 00:30:32.826 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.826 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.826 12:52:52 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:32.826 12:52:52 -- setup/common.sh@32 -- # continue 00:30:32.826 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.826 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.826 12:52:52 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:32.826 12:52:52 -- setup/common.sh@32 -- # continue 00:30:32.826 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.826 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.826 12:52:52 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:32.826 12:52:52 -- setup/common.sh@32 -- # continue 00:30:32.826 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.826 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.826 12:52:52 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:32.826 12:52:52 -- setup/common.sh@32 -- # continue 00:30:32.826 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.826 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.826 12:52:52 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:32.826 12:52:52 -- setup/common.sh@32 -- # continue 00:30:32.826 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.826 12:52:52 -- 
setup/common.sh@31 -- # read -r var val _ 00:30:32.826 12:52:52 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:32.826 12:52:52 -- setup/common.sh@32 -- # continue 00:30:32.826 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.826 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.826 12:52:52 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:32.826 12:52:52 -- setup/common.sh@32 -- # continue 00:30:32.826 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.826 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.826 12:52:52 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:32.826 12:52:52 -- setup/common.sh@32 -- # continue 00:30:32.826 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.826 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.826 12:52:52 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:32.826 12:52:52 -- setup/common.sh@32 -- # continue 00:30:32.826 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.826 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.826 12:52:52 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:32.826 12:52:52 -- setup/common.sh@32 -- # continue 00:30:32.826 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.826 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.826 12:52:52 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:32.826 12:52:52 -- setup/common.sh@32 -- # continue 00:30:32.826 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.826 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.826 12:52:52 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:32.826 12:52:52 -- setup/common.sh@32 -- # continue 00:30:32.826 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.826 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.826 12:52:52 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:32.826 12:52:52 -- setup/common.sh@32 -- # continue 00:30:32.826 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.826 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.826 12:52:52 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:32.826 12:52:52 -- setup/common.sh@32 -- # continue 00:30:32.826 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.826 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.826 12:52:52 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:32.826 12:52:52 -- setup/common.sh@32 -- # continue 00:30:32.826 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.826 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.826 12:52:52 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:32.826 12:52:52 -- setup/common.sh@32 -- # continue 00:30:32.826 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.826 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.826 12:52:52 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:32.826 12:52:52 -- setup/common.sh@32 -- # continue 00:30:32.826 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.826 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.826 12:52:52 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:32.826 12:52:52 -- setup/common.sh@32 -- # continue 
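[editor's note] For reference, the numbers this verify_nr_hugepages pass is checking: custom_alloc asked get_test_nr_hugepages for 1048576 kB, which at the reported Hugepagesize of 2048 kB works out to 512 pages, and the meminfo snapshots above show HugePages_Total: 512 with HugePages_Surp and HugePages_Rsvd both 0, so the comparison 512 == nr_hugepages + surp + resv holds. A hedged sketch of that size-to-pages computation and final check, reusing the get_meminfo_field sketch earlier in this log (variable names illustrative):

size_kb=1048576                                       # requested huge memory
hugepagesize_kb=$(get_meminfo_field Hugepagesize)     # 2048 on this run
nr_hugepages=$(( size_kb / hugepagesize_kb ))         # 1048576 / 2048 = 512
total=$(get_meminfo_field HugePages_Total)            # 512
surp=$(get_meminfo_field HugePages_Surp)              # 0
resv=$(get_meminfo_field HugePages_Rsvd)              # 0
(( total == nr_hugepages + surp + resv )) && echo "hugepages add up: $total pages"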
00:30:32.826 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.826 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.826 12:52:52 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:32.826 12:52:52 -- setup/common.sh@32 -- # continue 00:30:32.826 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.826 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.826 12:52:52 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:32.826 12:52:52 -- setup/common.sh@32 -- # continue 00:30:32.826 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.826 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.826 12:52:52 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:32.826 12:52:52 -- setup/common.sh@32 -- # continue 00:30:32.826 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.826 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.826 12:52:52 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:32.826 12:52:52 -- setup/common.sh@32 -- # continue 00:30:32.826 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.826 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.826 12:52:52 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:32.826 12:52:52 -- setup/common.sh@32 -- # continue 00:30:32.826 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.826 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.826 12:52:52 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:32.826 12:52:52 -- setup/common.sh@32 -- # continue 00:30:32.826 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.826 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.826 12:52:52 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:32.826 12:52:52 -- setup/common.sh@32 -- # continue 00:30:32.826 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.826 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.826 12:52:52 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:32.826 12:52:52 -- setup/common.sh@32 -- # continue 00:30:32.826 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:32.826 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:32.826 12:52:52 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:33.086 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.086 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.086 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.086 12:52:52 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:33.086 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.086 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.086 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.086 12:52:52 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:33.086 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.086 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.086 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.086 12:52:52 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:33.086 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.086 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.086 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.086 12:52:52 -- 
setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:33.086 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.086 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.086 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.086 12:52:52 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:33.086 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.086 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.086 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.086 12:52:52 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:33.086 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.086 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.086 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.086 12:52:52 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:33.086 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.086 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.086 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.086 12:52:52 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:33.086 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.086 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.086 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.086 12:52:52 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:33.086 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.086 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.086 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.086 12:52:52 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:33.086 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.086 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.086 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.086 12:52:52 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:33.086 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.086 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.086 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.086 12:52:52 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:33.086 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.086 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.086 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.086 12:52:52 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:33.086 12:52:52 -- setup/common.sh@33 -- # echo 0 00:30:33.086 12:52:52 -- setup/common.sh@33 -- # return 0 00:30:33.086 12:52:52 -- setup/hugepages.sh@99 -- # surp=0 00:30:33.086 12:52:52 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:30:33.086 12:52:52 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:30:33.086 12:52:52 -- setup/common.sh@18 -- # local node= 00:30:33.086 12:52:52 -- setup/common.sh@19 -- # local var val 00:30:33.086 12:52:52 -- setup/common.sh@20 -- # local mem_f mem 00:30:33.086 12:52:52 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:30:33.086 12:52:52 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:30:33.086 12:52:52 -- setup/common.sh@25 -- # [[ -n '' ]] 00:30:33.086 12:52:52 -- setup/common.sh@28 -- # mapfile -t mem 00:30:33.086 12:52:52 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:30:33.086 12:52:52 -- 
setup/common.sh@31 -- # IFS=': ' 00:30:33.086 12:52:52 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7891400 kB' 'MemAvailable: 10503568 kB' 'Buffers: 2436 kB' 'Cached: 2814748 kB' 'SwapCached: 0 kB' 'Active: 491816 kB' 'Inactive: 2444476 kB' 'Active(anon): 129576 kB' 'Inactive(anon): 0 kB' 'Active(file): 362240 kB' 'Inactive(file): 2444476 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 120648 kB' 'Mapped: 48868 kB' 'Shmem: 10468 kB' 'KReclaimable: 84780 kB' 'Slab: 164572 kB' 'SReclaimable: 84780 kB' 'SUnreclaim: 79792 kB' 'KernelStack: 6604 kB' 'PageTables: 4260 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 351704 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54932 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:30:33.086 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.086 12:52:52 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:33.086 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.086 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.086 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.086 12:52:52 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:33.086 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.086 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.086 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.086 12:52:52 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:33.086 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.086 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.086 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.086 12:52:52 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:33.086 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.086 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.086 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.086 12:52:52 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:33.086 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.086 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.086 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.086 12:52:52 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:33.086 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.086 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.086 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.086 12:52:52 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:33.086 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.086 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.086 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.086 12:52:52 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:33.086 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.086 12:52:52 -- 
setup/common.sh@31 -- # IFS=': ' 00:30:33.086 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.086 12:52:52 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:33.086 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.086 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.086 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.086 12:52:52 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:33.086 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.086 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.086 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.086 12:52:52 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:33.086 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.086 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.086 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.086 12:52:52 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:33.086 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.086 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.086 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.086 12:52:52 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:33.086 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.086 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.086 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.086 12:52:52 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:33.086 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.086 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.086 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.086 12:52:52 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:33.086 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.086 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.086 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.086 12:52:52 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:33.086 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.086 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.086 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.086 12:52:52 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:33.086 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.086 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.086 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.086 12:52:52 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:33.086 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.086 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.086 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.086 12:52:52 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:33.086 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.086 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.086 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.086 12:52:52 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:33.086 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.086 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.086 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.086 12:52:52 -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:33.086 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.086 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.086 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.086 12:52:52 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:33.086 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.086 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.086 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.086 12:52:52 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:33.086 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.086 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.086 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.086 12:52:52 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:33.086 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.086 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.086 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.086 12:52:52 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:33.086 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.086 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.086 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.087 12:52:52 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:33.087 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.087 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.087 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.087 12:52:52 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:33.087 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.087 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.087 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.087 12:52:52 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:33.087 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.087 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.087 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.087 12:52:52 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:33.087 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.087 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.087 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.087 12:52:52 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:33.087 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.087 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.087 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.087 12:52:52 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:33.087 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.087 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.087 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.087 12:52:52 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:33.087 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.087 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.087 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.087 12:52:52 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:33.087 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.087 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.087 12:52:52 -- 
setup/common.sh@31 -- # read -r var val _ 00:30:33.087 12:52:52 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:33.087 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.087 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.087 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.087 12:52:52 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:33.087 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.087 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.087 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.087 12:52:52 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:33.087 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.087 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.087 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.087 12:52:52 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:33.087 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.087 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.087 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.087 12:52:52 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:33.087 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.087 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.087 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.087 12:52:52 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:33.087 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.087 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.087 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.087 12:52:52 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:33.087 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.087 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.087 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.087 12:52:52 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:33.087 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.087 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.087 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.087 12:52:52 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:33.087 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.087 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.087 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.087 12:52:52 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:33.087 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.087 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.087 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.087 12:52:52 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:33.087 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.087 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.087 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.087 12:52:52 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:33.087 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.087 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.087 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.087 12:52:52 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:33.087 
12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.087 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.087 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.087 12:52:52 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:33.087 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.087 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.087 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.087 12:52:52 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:33.087 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.087 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.087 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.087 12:52:52 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:33.087 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.087 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.087 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.087 12:52:52 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:33.087 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.087 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.087 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.087 12:52:52 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:33.087 12:52:52 -- setup/common.sh@33 -- # echo 0 00:30:33.087 12:52:52 -- setup/common.sh@33 -- # return 0 00:30:33.087 nr_hugepages=512 00:30:33.087 resv_hugepages=0 00:30:33.087 surplus_hugepages=0 00:30:33.087 anon_hugepages=0 00:30:33.087 12:52:52 -- setup/hugepages.sh@100 -- # resv=0 00:30:33.087 12:52:52 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:30:33.087 12:52:52 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:30:33.087 12:52:52 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:30:33.087 12:52:52 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:30:33.087 12:52:52 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:30:33.087 12:52:52 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:30:33.087 12:52:52 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:30:33.087 12:52:52 -- setup/common.sh@17 -- # local get=HugePages_Total 00:30:33.087 12:52:52 -- setup/common.sh@18 -- # local node= 00:30:33.087 12:52:52 -- setup/common.sh@19 -- # local var val 00:30:33.087 12:52:52 -- setup/common.sh@20 -- # local mem_f mem 00:30:33.087 12:52:52 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:30:33.087 12:52:52 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:30:33.087 12:52:52 -- setup/common.sh@25 -- # [[ -n '' ]] 00:30:33.087 12:52:52 -- setup/common.sh@28 -- # mapfile -t mem 00:30:33.087 12:52:52 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:30:33.087 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.087 12:52:52 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7891400 kB' 'MemAvailable: 10503568 kB' 'Buffers: 2436 kB' 'Cached: 2814748 kB' 'SwapCached: 0 kB' 'Active: 491804 kB' 'Inactive: 2444476 kB' 'Active(anon): 129564 kB' 'Inactive(anon): 0 kB' 'Active(file): 362240 kB' 'Inactive(file): 2444476 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 20 kB' 'AnonPages: 120604 kB' 'Mapped: 48868 kB' 'Shmem: 10468 kB' 'KReclaimable: 84780 kB' 'Slab: 164568 kB' 
'SReclaimable: 84780 kB' 'SUnreclaim: 79788 kB' 'KernelStack: 6604 kB' 'PageTables: 4256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 351704 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54948 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:30:33.087 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.087 12:52:52 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:33.087 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.087 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.087 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.087 12:52:52 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:33.087 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.087 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.087 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.087 12:52:52 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:33.087 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.087 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.087 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.087 12:52:52 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:33.087 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.087 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.087 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.087 12:52:52 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:33.087 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.087 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.087 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.087 12:52:52 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:33.087 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.087 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.087 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.087 12:52:52 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:33.087 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.087 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.087 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.087 12:52:52 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:33.087 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.087 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.087 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.087 12:52:52 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:33.087 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.087 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.087 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.087 12:52:52 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:33.087 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.087 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.087 
12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.087 12:52:52 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:33.087 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.087 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.087 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.087 12:52:52 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:33.087 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.087 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.087 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.087 12:52:52 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:33.087 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.087 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.087 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.087 12:52:52 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:33.087 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.087 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.087 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.087 12:52:52 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:33.087 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.087 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.087 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.087 12:52:52 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:33.087 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.087 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.087 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.087 12:52:52 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:33.087 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.087 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.087 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.087 12:52:52 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:33.087 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.087 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.087 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.087 12:52:52 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:33.087 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.087 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.087 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.087 12:52:52 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:33.087 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.087 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.087 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.087 12:52:52 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:33.087 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.087 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.087 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.087 12:52:52 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:33.087 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.087 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.087 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.087 12:52:52 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:33.087 12:52:52 -- 
setup/common.sh@32 -- # continue 00:30:33.087 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.087 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.087 12:52:52 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:33.087 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.087 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.087 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.087 12:52:52 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:33.087 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.087 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.087 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.087 12:52:52 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:33.087 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.087 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.087 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.087 12:52:52 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:33.088 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.088 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.088 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.088 12:52:52 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:33.088 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.088 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.088 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.088 12:52:52 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:33.088 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.088 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.088 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.088 12:52:52 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:33.088 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.088 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.088 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.088 12:52:52 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:33.088 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.088 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.088 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.088 12:52:52 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:33.088 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.088 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.088 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.088 12:52:52 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:33.088 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.088 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.088 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.088 12:52:52 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:33.088 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.088 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.088 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.088 12:52:52 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:33.088 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.088 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.088 12:52:52 -- setup/common.sh@31 -- # read -r var 
val _ 00:30:33.088 12:52:52 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:33.088 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.088 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.088 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.088 12:52:52 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:33.088 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.088 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.088 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.088 12:52:52 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:33.088 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.088 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.088 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.088 12:52:52 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:33.088 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.088 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.088 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.088 12:52:52 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:33.088 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.088 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.088 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.088 12:52:52 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:33.088 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.088 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.088 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.088 12:52:52 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:33.088 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.088 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.088 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.088 12:52:52 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:33.088 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.088 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.088 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.088 12:52:52 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:33.088 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.088 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.088 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.088 12:52:52 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:33.088 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.088 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.088 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.088 12:52:52 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:33.088 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.088 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.088 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.088 12:52:52 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:33.088 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.088 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.088 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.088 12:52:52 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:33.088 12:52:52 -- 
setup/common.sh@32 -- # continue 00:30:33.088 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.088 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.088 12:52:52 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:33.088 12:52:52 -- setup/common.sh@33 -- # echo 512 00:30:33.088 12:52:52 -- setup/common.sh@33 -- # return 0 00:30:33.088 12:52:52 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:30:33.088 12:52:52 -- setup/hugepages.sh@112 -- # get_nodes 00:30:33.088 12:52:52 -- setup/hugepages.sh@27 -- # local node 00:30:33.088 12:52:52 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:30:33.088 12:52:52 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:30:33.088 12:52:52 -- setup/hugepages.sh@32 -- # no_nodes=1 00:30:33.088 12:52:52 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:30:33.088 12:52:52 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:30:33.088 12:52:52 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:30:33.088 12:52:52 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:30:33.088 12:52:52 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:30:33.088 12:52:52 -- setup/common.sh@18 -- # local node=0 00:30:33.088 12:52:52 -- setup/common.sh@19 -- # local var val 00:30:33.088 12:52:52 -- setup/common.sh@20 -- # local mem_f mem 00:30:33.088 12:52:52 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:30:33.088 12:52:52 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:30:33.088 12:52:52 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:30:33.088 12:52:52 -- setup/common.sh@28 -- # mapfile -t mem 00:30:33.088 12:52:52 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:30:33.088 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.088 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.088 12:52:52 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7891400 kB' 'MemUsed: 4350580 kB' 'SwapCached: 0 kB' 'Active: 491804 kB' 'Inactive: 2444476 kB' 'Active(anon): 129564 kB' 'Inactive(anon): 0 kB' 'Active(file): 362240 kB' 'Inactive(file): 2444476 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 2817184 kB' 'Mapped: 48868 kB' 'AnonPages: 120644 kB' 'Shmem: 10468 kB' 'KernelStack: 6620 kB' 'PageTables: 4312 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 84780 kB' 'Slab: 164568 kB' 'SReclaimable: 84780 kB' 'SUnreclaim: 79788 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:30:33.088 12:52:52 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:33.088 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.088 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.088 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.088 12:52:52 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:33.088 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.088 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.088 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.088 12:52:52 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:33.088 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.088 
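Aside for readers skimming this trace: every get_meminfo call above follows the same pattern — read the relevant meminfo file line by line with IFS=': ', skip keys that do not match, and echo the value of the requested field — and the caller then asserts that HugePages_Total equals nr_hugepages + surplus + reserved. A minimal sketch of that lookup pattern, with a hypothetical helper name (the real function is get_meminfo in setup/common.sh, which additionally handles the per-node files under /sys/devices/system/node):

meminfo_value() {
    local key=$1 var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$key" ]]; then
            printf '%s\n' "$val"   # numeric value; the unit, if present, lands in $_
            return 0
        fi
    done < /proc/meminfo
    return 1
}
meminfo_value HugePages_Total   # 512 at this point in the run
meminfo_value HugePages_Rsvd    # 0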
12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.088 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.088 12:52:52 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:33.088 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.088 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.088 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.088 12:52:52 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:33.088 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.088 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.088 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.088 12:52:52 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:33.088 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.088 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.088 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.088 12:52:52 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:33.088 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.088 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.088 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.088 12:52:52 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:33.088 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.088 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.088 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.088 12:52:52 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:33.088 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.088 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.088 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.088 12:52:52 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:33.088 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.088 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.088 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.088 12:52:52 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:33.088 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.088 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.088 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.088 12:52:52 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:33.088 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.088 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.088 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.088 12:52:52 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:33.088 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.088 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.088 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.088 12:52:52 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:33.088 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.088 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.088 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.088 12:52:52 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:33.088 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.088 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.088 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.088 12:52:52 -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:33.088 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.088 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.088 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.088 12:52:52 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:33.088 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.088 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.088 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.088 12:52:52 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:33.088 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.088 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.088 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.088 12:52:52 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:33.088 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.088 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.088 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.088 12:52:52 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:33.088 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.088 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.088 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.088 12:52:52 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:33.088 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.088 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.088 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.088 12:52:52 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:33.088 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.088 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.088 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.088 12:52:52 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:33.088 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.088 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.088 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.088 12:52:52 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:33.088 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.088 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.088 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.088 12:52:52 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:33.088 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.088 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.088 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.088 12:52:52 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:33.088 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.088 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.088 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.088 12:52:52 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:33.088 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.088 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.088 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.088 12:52:52 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:33.088 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.088 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.088 12:52:52 -- 
setup/common.sh@31 -- # read -r var val _ 00:30:33.088 12:52:52 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:33.088 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.088 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.088 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.088 12:52:52 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:33.088 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.088 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.088 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.088 12:52:52 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:33.088 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.088 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.088 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.088 12:52:52 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:33.089 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.089 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.089 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.089 12:52:52 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:33.089 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.089 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.089 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.089 12:52:52 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:33.089 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.089 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.089 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.089 12:52:52 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:33.089 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.089 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.089 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.089 12:52:52 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:33.089 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.089 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.089 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.089 12:52:52 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:33.089 12:52:52 -- setup/common.sh@33 -- # echo 0 00:30:33.089 12:52:52 -- setup/common.sh@33 -- # return 0 00:30:33.089 node0=512 expecting 512 00:30:33.089 12:52:52 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:30:33.089 12:52:52 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:30:33.089 12:52:52 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:30:33.089 12:52:52 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:30:33.089 12:52:52 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:30:33.089 12:52:52 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:30:33.089 00:30:33.089 real 0m0.563s 00:30:33.089 user 0m0.286s 00:30:33.089 sys 0m0.284s 00:30:33.089 ************************************ 00:30:33.089 END TEST custom_alloc 00:30:33.089 ************************************ 00:30:33.089 12:52:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:33.089 12:52:52 -- common/autotest_common.sh@10 -- # set +x 00:30:33.089 12:52:52 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:30:33.089 12:52:52 -- 
common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:30:33.089 12:52:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:33.089 12:52:52 -- common/autotest_common.sh@10 -- # set +x 00:30:33.089 ************************************ 00:30:33.089 START TEST no_shrink_alloc 00:30:33.089 ************************************ 00:30:33.089 12:52:52 -- common/autotest_common.sh@1104 -- # no_shrink_alloc 00:30:33.089 12:52:52 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:30:33.089 12:52:52 -- setup/hugepages.sh@49 -- # local size=2097152 00:30:33.089 12:52:52 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:30:33.089 12:52:52 -- setup/hugepages.sh@51 -- # shift 00:30:33.089 12:52:52 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:30:33.089 12:52:52 -- setup/hugepages.sh@52 -- # local node_ids 00:30:33.089 12:52:52 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:30:33.089 12:52:52 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:30:33.089 12:52:52 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:30:33.089 12:52:52 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:30:33.089 12:52:52 -- setup/hugepages.sh@62 -- # local user_nodes 00:30:33.089 12:52:52 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:30:33.089 12:52:52 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:30:33.089 12:52:52 -- setup/hugepages.sh@67 -- # nodes_test=() 00:30:33.089 12:52:52 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:30:33.089 12:52:52 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:30:33.089 12:52:52 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:30:33.089 12:52:52 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:30:33.089 12:52:52 -- setup/hugepages.sh@73 -- # return 0 00:30:33.089 12:52:52 -- setup/hugepages.sh@198 -- # setup output 00:30:33.089 12:52:52 -- setup/common.sh@9 -- # [[ output == output ]] 00:30:33.089 12:52:52 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:30:33.348 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:33.348 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:30:33.348 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:30:33.609 12:52:52 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:30:33.609 12:52:52 -- setup/hugepages.sh@89 -- # local node 00:30:33.609 12:52:52 -- setup/hugepages.sh@90 -- # local sorted_t 00:30:33.609 12:52:52 -- setup/hugepages.sh@91 -- # local sorted_s 00:30:33.609 12:52:52 -- setup/hugepages.sh@92 -- # local surp 00:30:33.609 12:52:52 -- setup/hugepages.sh@93 -- # local resv 00:30:33.609 12:52:52 -- setup/hugepages.sh@94 -- # local anon 00:30:33.609 12:52:52 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:30:33.609 12:52:52 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:30:33.609 12:52:52 -- setup/common.sh@17 -- # local get=AnonHugePages 00:30:33.609 12:52:52 -- setup/common.sh@18 -- # local node= 00:30:33.609 12:52:52 -- setup/common.sh@19 -- # local var val 00:30:33.609 12:52:52 -- setup/common.sh@20 -- # local mem_f mem 00:30:33.609 12:52:52 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:30:33.609 12:52:52 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:30:33.609 12:52:52 -- setup/common.sh@25 -- # [[ -n '' ]] 00:30:33.609 12:52:52 -- setup/common.sh@28 -- # mapfile -t mem 00:30:33.609 12:52:52 -- setup/common.sh@29 -- # mem=("${mem[@]#Node 
+([0-9]) }") 00:30:33.609 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.609 12:52:52 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6843648 kB' 'MemAvailable: 9455816 kB' 'Buffers: 2436 kB' 'Cached: 2814748 kB' 'SwapCached: 0 kB' 'Active: 492132 kB' 'Inactive: 2444476 kB' 'Active(anon): 129892 kB' 'Inactive(anon): 0 kB' 'Active(file): 362240 kB' 'Inactive(file): 2444476 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120804 kB' 'Mapped: 48804 kB' 'Shmem: 10468 kB' 'KReclaimable: 84780 kB' 'Slab: 164652 kB' 'SReclaimable: 84780 kB' 'SUnreclaim: 79872 kB' 'KernelStack: 6568 kB' 'PageTables: 4372 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 351704 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54964 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:30:33.609 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.609 12:52:52 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:33.609 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.609 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.609 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.609 12:52:52 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:33.609 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.609 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.609 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.609 12:52:52 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:33.609 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.609 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.609 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.609 12:52:52 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:33.609 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.609 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.609 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.609 12:52:52 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:33.609 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.609 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.609 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.609 12:52:52 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:33.609 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.609 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.609 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.609 12:52:52 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:33.609 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.609 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.609 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.609 12:52:52 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:33.609 12:52:52 -- setup/common.sh@32 -- # continue 
00:30:33.609 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.609 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.609 12:52:52 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:33.609 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.609 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.609 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.609 12:52:52 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:33.609 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.609 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.609 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.609 12:52:52 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:33.609 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.609 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.609 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.609 12:52:52 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:33.609 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.609 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.609 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.609 12:52:52 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:33.609 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.609 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.609 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.609 12:52:52 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:33.609 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.609 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.609 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.609 12:52:52 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:33.609 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.609 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.609 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.609 12:52:52 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:33.610 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.610 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.610 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.610 12:52:52 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:33.610 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.610 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.610 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.610 12:52:52 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:33.610 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.610 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.610 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.610 12:52:52 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:33.610 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.610 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.610 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.610 12:52:52 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:33.610 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.610 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.610 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.610 12:52:52 -- setup/common.sh@32 -- # [[ AnonPages == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:33.610 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.610 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.610 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.610 12:52:52 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:33.610 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.610 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.610 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.610 12:52:52 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:33.610 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.610 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.610 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.610 12:52:52 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:33.610 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.610 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.610 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.610 12:52:52 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:33.610 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.610 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.610 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.610 12:52:52 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:33.610 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.610 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.610 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.610 12:52:52 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:33.610 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.610 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.610 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.610 12:52:52 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:33.610 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.610 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.610 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.610 12:52:52 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:33.610 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.610 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.610 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.610 12:52:52 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:33.610 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.610 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.610 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.610 12:52:52 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:33.610 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.610 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.610 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.610 12:52:52 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:33.610 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.610 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.610 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.610 12:52:52 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:33.610 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.610 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.610 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 
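The anon accounting traced here depends on the THP check at setup/hugepages.sh@96 a few entries back: AnonHugePages is only recorded when transparent hugepages are not set to 'never'. A rough sketch of that guard, assuming the standard THP sysfs path (the real logic lives in verify_nr_hugepages):

thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null)
anon=0
if [[ $thp != *"[never]"* ]]; then
    # THP is 'always' or 'madvise' on this box, so read AnonHugePages from /proc/meminfo
    anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
fi
echo "anon=${anon:-0}"   # 0 kB in this run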
00:30:33.610 12:52:52 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:33.610 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.610 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.610 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.610 12:52:52 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:33.610 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.610 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.610 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.610 12:52:52 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:33.610 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.610 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.610 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.610 12:52:52 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:33.610 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.610 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.610 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.610 12:52:52 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:33.610 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.610 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.610 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.610 12:52:52 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:33.610 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.610 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.610 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.610 12:52:52 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:33.610 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.610 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.610 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.610 12:52:52 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:33.610 12:52:52 -- setup/common.sh@33 -- # echo 0 00:30:33.610 12:52:52 -- setup/common.sh@33 -- # return 0 00:30:33.610 12:52:52 -- setup/hugepages.sh@97 -- # anon=0 00:30:33.610 12:52:52 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:30:33.610 12:52:52 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:30:33.610 12:52:52 -- setup/common.sh@18 -- # local node= 00:30:33.610 12:52:52 -- setup/common.sh@19 -- # local var val 00:30:33.610 12:52:52 -- setup/common.sh@20 -- # local mem_f mem 00:30:33.610 12:52:52 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:30:33.610 12:52:52 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:30:33.610 12:52:52 -- setup/common.sh@25 -- # [[ -n '' ]] 00:30:33.610 12:52:52 -- setup/common.sh@28 -- # mapfile -t mem 00:30:33.610 12:52:52 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:30:33.610 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.610 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.610 12:52:52 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6843916 kB' 'MemAvailable: 9456084 kB' 'Buffers: 2436 kB' 'Cached: 2814748 kB' 'SwapCached: 0 kB' 'Active: 491868 kB' 'Inactive: 2444476 kB' 'Active(anon): 129628 kB' 'Inactive(anon): 0 kB' 'Active(file): 362240 kB' 'Inactive(file): 2444476 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 
'Writeback: 0 kB' 'AnonPages: 120736 kB' 'Mapped: 48820 kB' 'Shmem: 10468 kB' 'KReclaimable: 84780 kB' 'Slab: 164660 kB' 'SReclaimable: 84780 kB' 'SUnreclaim: 79880 kB' 'KernelStack: 6568 kB' 'PageTables: 4368 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 351704 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54932 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:30:33.610 12:52:52 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:33.610 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.610 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.610 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.610 12:52:52 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:33.610 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.610 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.610 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.610 12:52:52 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:33.610 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.610 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.610 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.610 12:52:52 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:33.610 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.610 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.610 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.610 12:52:52 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:33.610 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.610 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.610 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.610 12:52:52 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:33.610 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.610 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.611 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.611 12:52:52 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:33.611 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.611 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.611 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.611 12:52:52 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:33.611 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.611 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.611 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.611 12:52:52 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:33.611 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.611 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.611 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.611 12:52:52 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:33.611 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.611 12:52:52 -- 
setup/common.sh@31 -- # IFS=': ' 00:30:33.611 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.611 12:52:52 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:33.611 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.611 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.611 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.611 12:52:52 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:33.611 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.611 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.611 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.611 12:52:52 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:33.611 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.611 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.611 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.611 12:52:52 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:33.611 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.611 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.611 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.611 12:52:52 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:33.611 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.611 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.611 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.611 12:52:52 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:33.611 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.611 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.611 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.611 12:52:52 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:33.611 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.611 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.611 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.611 12:52:52 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:33.611 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.611 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.611 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.611 12:52:52 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:33.611 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.611 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.611 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.611 12:52:52 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:33.611 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.611 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.611 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.611 12:52:52 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:33.611 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.611 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.611 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.611 12:52:52 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:33.611 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.611 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.611 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.611 12:52:52 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
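The HugePages_* counters being pulled out of meminfo here are also exposed per NUMA node in sysfs; a small sketch reading them directly (standard kernel paths, node0 assumed since this VM reports a single node):

node=/sys/devices/system/node/node0/hugepages/hugepages-2048kB
for f in nr_hugepages free_hugepages surplus_hugepages; do
    printf '%s=%s\n' "$f" "$(cat "$node/$f")"
done
# expected at this point: nr_hugepages=1024, free_hugepages=1024, surplus_hugepages=0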
00:30:33.611 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.611 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.611 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.611 12:52:52 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:33.611 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.611 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.611 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.611 12:52:52 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:33.611 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.611 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.611 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.611 12:52:52 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:33.611 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.611 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.611 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.611 12:52:52 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:33.611 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.611 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.611 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.611 12:52:52 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:33.611 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.611 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.611 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.611 12:52:52 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:33.611 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.611 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.611 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.611 12:52:52 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:33.611 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.611 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.611 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.611 12:52:52 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:33.611 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.611 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.611 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.611 12:52:52 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:33.611 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.611 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.611 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.611 12:52:52 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:33.611 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.611 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.611 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.611 12:52:52 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:33.611 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.611 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.611 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.611 12:52:52 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:33.611 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.611 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.611 12:52:52 -- setup/common.sh@31 -- # read -r var 
val _ 00:30:33.611 12:52:52 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:33.611 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.611 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.611 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.611 12:52:52 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:33.611 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.611 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.611 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.611 12:52:52 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:33.611 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.611 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.611 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.611 12:52:52 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:33.611 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.611 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.611 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.611 12:52:52 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:33.611 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.611 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.611 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.611 12:52:52 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:33.611 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.611 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.611 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.611 12:52:52 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:33.611 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.611 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.611 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.611 12:52:52 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:33.611 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.611 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.611 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.611 12:52:52 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:33.611 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.611 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.611 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.612 12:52:52 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:33.612 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.612 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.612 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.612 12:52:52 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:33.612 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.612 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.612 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.612 12:52:52 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:33.612 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.612 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.612 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.612 12:52:52 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:33.612 12:52:52 -- setup/common.sh@32 -- # continue 
00:30:33.612 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.612 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.612 12:52:52 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:33.612 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.612 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.612 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.612 12:52:52 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:33.612 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.612 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.612 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.612 12:52:52 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:33.612 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.612 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.612 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.612 12:52:52 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:33.612 12:52:52 -- setup/common.sh@33 -- # echo 0 00:30:33.612 12:52:52 -- setup/common.sh@33 -- # return 0 00:30:33.612 12:52:52 -- setup/hugepages.sh@99 -- # surp=0 00:30:33.612 12:52:52 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:30:33.612 12:52:52 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:30:33.612 12:52:52 -- setup/common.sh@18 -- # local node= 00:30:33.612 12:52:52 -- setup/common.sh@19 -- # local var val 00:30:33.612 12:52:52 -- setup/common.sh@20 -- # local mem_f mem 00:30:33.612 12:52:52 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:30:33.612 12:52:52 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:30:33.612 12:52:52 -- setup/common.sh@25 -- # [[ -n '' ]] 00:30:33.612 12:52:52 -- setup/common.sh@28 -- # mapfile -t mem 00:30:33.612 12:52:52 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:30:33.612 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.612 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.612 12:52:52 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6844108 kB' 'MemAvailable: 9456276 kB' 'Buffers: 2436 kB' 'Cached: 2814748 kB' 'SwapCached: 0 kB' 'Active: 491740 kB' 'Inactive: 2444476 kB' 'Active(anon): 129500 kB' 'Inactive(anon): 0 kB' 'Active(file): 362240 kB' 'Inactive(file): 2444476 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120648 kB' 'Mapped: 48820 kB' 'Shmem: 10468 kB' 'KReclaimable: 84780 kB' 'Slab: 164648 kB' 'SReclaimable: 84780 kB' 'SUnreclaim: 79868 kB' 'KernelStack: 6552 kB' 'PageTables: 4312 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 354708 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54916 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:30:33.612 12:52:52 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:33.612 12:52:52 -- setup/common.sh@32 -- 
# continue 00:30:33.612 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.612 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.612 12:52:52 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:33.612 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.612 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.612 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.612 12:52:52 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:33.612 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.612 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.612 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.612 12:52:52 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:33.612 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.612 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.612 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.612 12:52:52 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:33.612 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.612 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.612 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.612 12:52:52 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:33.612 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.612 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.612 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.612 12:52:52 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:33.612 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.612 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.612 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.612 12:52:52 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:33.612 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.612 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.612 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.612 12:52:52 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:33.612 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.612 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.612 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.612 12:52:52 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:33.612 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.612 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.612 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.612 12:52:52 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:33.612 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.612 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.612 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.612 12:52:52 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:33.612 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.612 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.612 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.612 12:52:52 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:33.612 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.612 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.612 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.612 12:52:52 -- setup/common.sh@32 -- 
# [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:33.612 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.612 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.612 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.612 12:52:52 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:33.612 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.612 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.612 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.612 12:52:52 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:33.612 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.612 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.612 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.612 12:52:52 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:33.612 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.612 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.612 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.612 12:52:52 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:33.612 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.612 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.612 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.612 12:52:52 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:33.612 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.612 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.612 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.612 12:52:52 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:33.612 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.612 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.612 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.612 12:52:52 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:33.612 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.612 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.612 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.612 12:52:52 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:33.612 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.612 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.612 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.612 12:52:52 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:33.612 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.613 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.613 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.613 12:52:52 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:33.613 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.613 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.613 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.613 12:52:52 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:33.613 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.613 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.613 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.613 12:52:52 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:33.613 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.613 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.613 12:52:52 -- setup/common.sh@31 -- 
# read -r var val _ 00:30:33.613 12:52:52 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:33.613 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.613 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.613 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.613 12:52:52 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:33.613 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.613 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.613 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.613 12:52:52 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:33.613 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.613 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.613 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.613 12:52:52 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:33.613 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.613 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.613 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.613 12:52:52 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:33.613 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.613 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.613 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.613 12:52:52 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:33.613 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.613 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.613 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.613 12:52:52 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:33.613 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.613 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.613 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.613 12:52:52 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:33.613 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.613 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.613 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.613 12:52:52 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:33.613 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.613 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.613 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.613 12:52:52 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:33.613 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.613 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.613 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.613 12:52:52 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:33.613 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.613 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.613 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.613 12:52:52 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:33.613 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.613 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.613 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.613 12:52:52 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:33.613 12:52:52 -- setup/common.sh@32 -- # 
continue 00:30:33.613 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.613 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.613 12:52:52 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:33.613 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.613 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.613 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.613 12:52:52 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:33.613 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.613 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.613 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.613 12:52:52 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:33.613 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.613 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.613 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.613 12:52:52 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:33.613 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.613 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.613 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.613 12:52:52 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:33.613 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.613 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.613 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.613 12:52:52 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:33.613 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.613 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.613 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.613 12:52:52 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:33.613 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.613 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.613 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.613 12:52:52 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:33.613 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.613 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.613 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.613 12:52:52 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:33.613 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.613 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.613 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.613 12:52:52 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:33.613 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.613 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.613 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.613 12:52:52 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:33.613 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.613 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.613 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.613 12:52:52 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:33.613 12:52:52 -- setup/common.sh@33 -- # echo 0 00:30:33.613 12:52:52 -- setup/common.sh@33 -- # return 0 00:30:33.613 nr_hugepages=1024 00:30:33.613 resv_hugepages=0 00:30:33.613 
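The long runs of "[[ <field> == \H\u\g\e\P\a\g\e\s\_... ]] ... continue" above are the trace of setup/common.sh's get_meminfo helper scanning every /proc/meminfo (or per-node meminfo) field until it reaches the requested key and echoes its value. A minimal sketch of that loop, with the function name and the non-zero fallback return chosen here purely for illustration:

    get_meminfo_sketch() {                    # usage: get_meminfo_sketch HugePages_Rsvd [node]
        local get=$1 node=$2 var val _
        local mem_f=/proc/meminfo
        # when a node is given and a per-node meminfo exists, read that file instead
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        # per-node files prefix each line with "Node <n> ", which the traced script strips too
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }   # matched key: print its value
        done < <(sed 's/^Node [0-9]* //' "$mem_f")
        return 1
    }

Called as get_meminfo_sketch HugePages_Surp or get_meminfo_sketch HugePages_Rsvd against the snapshot above it would print 0, which is where the surp=0 and resv=0 values in the trace come from.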
surplus_hugepages=0 00:30:33.613 anon_hugepages=0 00:30:33.613 12:52:52 -- setup/hugepages.sh@100 -- # resv=0 00:30:33.613 12:52:52 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:30:33.613 12:52:52 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:30:33.613 12:52:52 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:30:33.613 12:52:52 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:30:33.613 12:52:52 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:30:33.613 12:52:52 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:30:33.613 12:52:52 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:30:33.613 12:52:52 -- setup/common.sh@17 -- # local get=HugePages_Total 00:30:33.613 12:52:52 -- setup/common.sh@18 -- # local node= 00:30:33.613 12:52:52 -- setup/common.sh@19 -- # local var val 00:30:33.613 12:52:52 -- setup/common.sh@20 -- # local mem_f mem 00:30:33.613 12:52:52 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:30:33.613 12:52:52 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:30:33.613 12:52:52 -- setup/common.sh@25 -- # [[ -n '' ]] 00:30:33.613 12:52:52 -- setup/common.sh@28 -- # mapfile -t mem 00:30:33.613 12:52:52 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:30:33.613 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.613 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.613 12:52:52 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6844108 kB' 'MemAvailable: 9456276 kB' 'Buffers: 2436 kB' 'Cached: 2814748 kB' 'SwapCached: 0 kB' 'Active: 491740 kB' 'Inactive: 2444476 kB' 'Active(anon): 129500 kB' 'Inactive(anon): 0 kB' 'Active(file): 362240 kB' 'Inactive(file): 2444476 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120776 kB' 'Mapped: 48820 kB' 'Shmem: 10468 kB' 'KReclaimable: 84780 kB' 'Slab: 164648 kB' 'SReclaimable: 84780 kB' 'SUnreclaim: 79868 kB' 'KernelStack: 6552 kB' 'PageTables: 4320 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 351704 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54868 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:30:33.613 12:52:52 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:33.613 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.613 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.614 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.614 12:52:52 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:33.614 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.614 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.614 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.614 12:52:52 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:33.614 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.614 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.614 12:52:52 -- 
setup/common.sh@31 -- # read -r var val _ 00:30:33.614 12:52:52 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:33.614 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.614 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.614 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.614 12:52:52 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:33.614 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.614 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.614 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.614 12:52:52 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:33.614 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.614 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.614 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.614 12:52:52 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:33.614 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.614 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.614 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.614 12:52:52 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:33.614 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.614 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.614 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.614 12:52:52 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:33.614 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.614 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.614 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.614 12:52:52 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:33.614 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.614 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.614 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.614 12:52:52 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:33.614 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.614 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.614 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.614 12:52:52 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:33.614 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.614 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.614 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.614 12:52:52 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:33.614 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.614 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.614 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.614 12:52:52 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:33.614 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.614 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.614 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.614 12:52:52 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:33.614 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.614 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.614 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.614 12:52:52 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:33.614 12:52:52 
-- setup/common.sh@32 -- # continue 00:30:33.614 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.614 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.614 12:52:52 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:33.614 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.614 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.614 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.614 12:52:52 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:33.614 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.614 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.614 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.614 12:52:52 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:33.614 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.614 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.614 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.614 12:52:52 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:33.614 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.614 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.614 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.614 12:52:52 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:33.614 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.614 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.614 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.614 12:52:52 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:33.614 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.614 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.614 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.614 12:52:52 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:33.614 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.614 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.614 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.614 12:52:52 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:33.614 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.614 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.614 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.614 12:52:52 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:33.614 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.614 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.614 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.614 12:52:52 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:33.614 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.614 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.614 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.614 12:52:52 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:33.614 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.614 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.614 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.614 12:52:52 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:33.615 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.615 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.615 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.615 12:52:52 
-- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:33.615 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.615 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.615 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.615 12:52:52 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:33.615 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.615 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.615 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.615 12:52:52 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:33.615 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.615 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.615 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.615 12:52:52 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:33.615 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.615 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.615 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.615 12:52:52 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:33.615 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.615 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.615 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.615 12:52:52 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:33.615 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.615 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.615 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.615 12:52:52 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:33.615 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.615 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.615 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.615 12:52:52 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:33.615 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.615 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.615 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.615 12:52:52 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:33.615 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.615 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.615 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.615 12:52:52 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:33.615 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.615 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.615 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.615 12:52:52 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:33.615 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.615 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.615 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.615 12:52:52 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:33.615 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.615 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.615 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.615 12:52:52 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:33.615 12:52:52 -- setup/common.sh@32 -- # continue 
00:30:33.615 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.615 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.615 12:52:52 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:33.615 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.615 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.615 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.615 12:52:52 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:33.615 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.615 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.615 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.615 12:52:52 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:33.615 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.615 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.615 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.615 12:52:52 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:33.615 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.615 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.615 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.615 12:52:52 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:33.615 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.615 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.615 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.615 12:52:52 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:33.615 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.615 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.615 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.615 12:52:52 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:33.615 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.615 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.615 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.615 12:52:52 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:33.615 12:52:52 -- setup/common.sh@33 -- # echo 1024 00:30:33.615 12:52:52 -- setup/common.sh@33 -- # return 0 00:30:33.615 12:52:52 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:30:33.615 12:52:52 -- setup/hugepages.sh@112 -- # get_nodes 00:30:33.615 12:52:52 -- setup/hugepages.sh@27 -- # local node 00:30:33.615 12:52:52 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:30:33.615 12:52:52 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:30:33.615 12:52:52 -- setup/hugepages.sh@32 -- # no_nodes=1 00:30:33.615 12:52:52 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:30:33.615 12:52:52 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:30:33.615 12:52:52 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:30:33.615 12:52:52 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:30:33.615 12:52:52 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:30:33.615 12:52:52 -- setup/common.sh@18 -- # local node=0 00:30:33.615 12:52:52 -- setup/common.sh@19 -- # local var val 00:30:33.615 12:52:52 -- setup/common.sh@20 -- # local mem_f mem 00:30:33.615 12:52:52 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:30:33.615 12:52:52 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo 
]] 00:30:33.615 12:52:52 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:30:33.615 12:52:52 -- setup/common.sh@28 -- # mapfile -t mem 00:30:33.615 12:52:52 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:30:33.615 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.615 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.615 12:52:52 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6844728 kB' 'MemUsed: 5397252 kB' 'SwapCached: 0 kB' 'Active: 491596 kB' 'Inactive: 2444476 kB' 'Active(anon): 129356 kB' 'Inactive(anon): 0 kB' 'Active(file): 362240 kB' 'Inactive(file): 2444476 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 2817184 kB' 'Mapped: 48820 kB' 'AnonPages: 120544 kB' 'Shmem: 10468 kB' 'KernelStack: 6552 kB' 'PageTables: 4312 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 84780 kB' 'Slab: 164648 kB' 'SReclaimable: 84780 kB' 'SUnreclaim: 79868 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:30:33.615 12:52:52 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:33.615 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.615 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.615 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.615 12:52:52 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:33.615 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.615 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.615 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.615 12:52:52 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:33.615 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.615 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.615 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.615 12:52:52 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:33.615 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.615 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.615 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.615 12:52:52 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:33.615 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.615 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.615 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.615 12:52:52 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:33.615 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.615 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.615 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.615 12:52:52 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:33.615 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.615 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.615 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.615 12:52:52 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:33.615 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.615 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.616 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.616 12:52:52 -- setup/common.sh@32 -- # [[ Active(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:33.616 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.616 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.616 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.616 12:52:52 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:33.616 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.616 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.616 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.616 12:52:52 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:33.616 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.616 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.616 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.616 12:52:52 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:33.616 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.616 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.616 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.616 12:52:52 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:33.616 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.616 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.616 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.616 12:52:52 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:33.616 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.616 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.616 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.616 12:52:52 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:33.616 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.616 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.616 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.616 12:52:52 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:33.616 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.616 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.616 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.616 12:52:52 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:33.616 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.616 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.616 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.616 12:52:52 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:33.616 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.616 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.616 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.616 12:52:52 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:33.616 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.616 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.616 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.616 12:52:52 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:33.616 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.616 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.616 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.616 12:52:52 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:33.616 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.616 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.616 12:52:52 -- setup/common.sh@31 
-- # read -r var val _ 00:30:33.616 12:52:52 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:33.616 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.616 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.616 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.616 12:52:52 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:33.616 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.616 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.616 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.616 12:52:52 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:33.616 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.616 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.616 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.616 12:52:52 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:33.616 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.616 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.616 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.616 12:52:52 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:33.616 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.616 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.616 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.616 12:52:52 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:33.616 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.616 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.616 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.616 12:52:52 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:33.616 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.616 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.616 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.616 12:52:52 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:33.616 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.616 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.616 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.616 12:52:52 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:33.616 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.616 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.616 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.616 12:52:52 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:33.616 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.616 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.616 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.616 12:52:52 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:33.616 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.616 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.616 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.616 12:52:52 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:33.616 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.616 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.616 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.616 12:52:52 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:33.616 12:52:52 -- setup/common.sh@32 -- 
# continue 00:30:33.616 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.616 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.616 12:52:52 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:33.616 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.616 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.616 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.616 12:52:52 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:33.616 12:52:52 -- setup/common.sh@32 -- # continue 00:30:33.616 12:52:52 -- setup/common.sh@31 -- # IFS=': ' 00:30:33.616 12:52:52 -- setup/common.sh@31 -- # read -r var val _ 00:30:33.616 12:52:52 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:33.616 12:52:52 -- setup/common.sh@33 -- # echo 0 00:30:33.616 12:52:52 -- setup/common.sh@33 -- # return 0 00:30:33.616 12:52:52 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:30:33.616 12:52:52 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:30:33.616 12:52:52 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:30:33.616 12:52:52 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:30:33.616 12:52:52 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:30:33.616 node0=1024 expecting 1024 00:30:33.616 12:52:52 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:30:33.616 12:52:52 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:30:33.616 12:52:52 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:30:33.616 12:52:52 -- setup/hugepages.sh@202 -- # setup output 00:30:33.616 12:52:52 -- setup/common.sh@9 -- # [[ output == output ]] 00:30:33.616 12:52:52 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:30:33.876 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:33.876 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:30:33.876 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:30:34.195 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:30:34.195 12:52:53 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:30:34.195 12:52:53 -- setup/hugepages.sh@89 -- # local node 00:30:34.195 12:52:53 -- setup/hugepages.sh@90 -- # local sorted_t 00:30:34.195 12:52:53 -- setup/hugepages.sh@91 -- # local sorted_s 00:30:34.195 12:52:53 -- setup/hugepages.sh@92 -- # local surp 00:30:34.195 12:52:53 -- setup/hugepages.sh@93 -- # local resv 00:30:34.195 12:52:53 -- setup/hugepages.sh@94 -- # local anon 00:30:34.195 12:52:53 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:30:34.195 12:52:53 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:30:34.195 12:52:53 -- setup/common.sh@17 -- # local get=AnonHugePages 00:30:34.195 12:52:53 -- setup/common.sh@18 -- # local node= 00:30:34.195 12:52:53 -- setup/common.sh@19 -- # local var val 00:30:34.195 12:52:53 -- setup/common.sh@20 -- # local mem_f mem 00:30:34.195 12:52:53 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:30:34.195 12:52:53 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:30:34.195 12:52:53 -- setup/common.sh@25 -- # [[ -n '' ]] 00:30:34.195 12:52:53 -- setup/common.sh@28 -- # mapfile -t mem 00:30:34.195 12:52:53 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:30:34.195 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.195 12:52:53 -- setup/common.sh@31 
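At this point hugepages.sh has confirmed "node0=1024 expecting 1024": it enumerates the nodes under /sys/devices/system/node, adds the reserved and per-node surplus counts to the expected total, and compares the result with nr_hugepages. A rough sketch of that per-node accounting, reusing the get_meminfo_sketch helper above (the 1024/0 figures are the values visible in this trace):

    nr_hugepages=1024 resv=0
    for node_dir in /sys/devices/system/node/node[0-9]*; do
        node=${node_dir##*node}
        surp=$(get_meminfo_sketch HugePages_Surp "$node")     # 0 in this run
        echo "node${node}=$(( nr_hugepages + resv + surp )) expecting ${nr_hugepages}"
    done

The second pass that follows (CLEAR_HUGE=no, NRHUGE=512) re-runs scripts/setup.sh, which is why the log reports that 512 hugepages were requested but 1024 are already allocated on node0.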
-- # read -r var val _ 00:30:34.195 12:52:53 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6841212 kB' 'MemAvailable: 9453380 kB' 'Buffers: 2436 kB' 'Cached: 2814748 kB' 'SwapCached: 0 kB' 'Active: 487836 kB' 'Inactive: 2444476 kB' 'Active(anon): 125596 kB' 'Inactive(anon): 0 kB' 'Active(file): 362240 kB' 'Inactive(file): 2444476 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 116756 kB' 'Mapped: 48340 kB' 'Shmem: 10468 kB' 'KReclaimable: 84780 kB' 'Slab: 164640 kB' 'SReclaimable: 84780 kB' 'SUnreclaim: 79860 kB' 'KernelStack: 6648 kB' 'PageTables: 4368 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 334232 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54916 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:30:34.195 12:52:53 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:34.195 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.195 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.195 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.195 12:52:53 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:34.195 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.195 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.196 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.196 12:52:53 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:34.196 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.196 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.196 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.196 12:52:53 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:34.196 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.196 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.196 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.196 12:52:53 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:34.196 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.196 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.196 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.196 12:52:53 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:34.196 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.196 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.196 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.196 12:52:53 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:34.196 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.196 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.196 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.196 12:52:53 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:34.196 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.196 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.196 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 
00:30:34.196 12:52:53 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:34.196 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.196 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.196 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.196 12:52:53 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:34.196 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.196 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.196 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.196 12:52:53 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:34.196 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.196 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.196 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.196 12:52:53 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:34.196 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.196 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.196 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.196 12:52:53 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:34.196 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.196 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.196 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.196 12:52:53 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:34.196 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.196 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.196 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.196 12:52:53 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:34.196 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.196 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.196 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.196 12:52:53 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:34.196 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.196 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.196 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.196 12:52:53 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:34.196 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.196 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.196 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.196 12:52:53 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:34.196 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.196 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.196 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.196 12:52:53 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:34.196 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.196 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.196 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.196 12:52:53 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:34.196 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.196 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.196 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.196 12:52:53 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:34.196 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.196 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 
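The backslash-escaped strings such as \A\n\o\n\H\u\g\e\P\a\g\e\s in these entries are an artifact of how bash's xtrace (set -x) renders a [[ == ]] test: when the right-hand side is a quoted expansion it is compared literally rather than as a glob, and the trace prints it with every character escaped to make that explicit. A minimal stand-alone reproduction, with hypothetical variable values and not taken from the test scripts:

#!/usr/bin/env bash
# Hypothetical demo of how `set -x` renders a literal [[ == ]] comparison.
set -x
get=AnonHugePages
var=MemTotal
# Quoted RHS => literal match; the trace shows it as \A\n\o\n\H\u\g\e\P\a\g\e\s
[[ $var == "$get" ]] || echo "no match for $var"
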
00:30:34.196 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.196 12:52:53 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:34.196 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.196 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.196 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.196 12:52:53 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:34.196 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.196 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.196 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.196 12:52:53 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:34.196 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.196 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.196 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.196 12:52:53 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:34.196 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.196 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.196 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.196 12:52:53 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:34.196 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.196 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.196 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.196 12:52:53 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:34.196 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.196 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.196 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.196 12:52:53 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:34.196 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.196 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.196 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.196 12:52:53 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:34.196 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.196 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.196 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.196 12:52:53 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:34.196 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.196 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.196 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.196 12:52:53 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:34.196 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.196 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.196 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.196 12:52:53 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:34.196 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.196 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.196 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.196 12:52:53 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:34.196 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.196 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.196 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.196 12:52:53 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:34.196 12:52:53 -- setup/common.sh@32 -- # 
continue 00:30:34.196 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.196 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.196 12:52:53 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:34.196 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.196 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.196 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.196 12:52:53 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:34.196 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.196 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.196 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.196 12:52:53 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:34.196 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.196 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.196 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.196 12:52:53 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:34.196 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.196 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.196 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.196 12:52:53 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:34.196 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.196 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.196 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.196 12:52:53 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:34.196 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.197 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.197 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.197 12:52:53 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:34.197 12:52:53 -- setup/common.sh@33 -- # echo 0 00:30:34.197 12:52:53 -- setup/common.sh@33 -- # return 0 00:30:34.197 12:52:53 -- setup/hugepages.sh@97 -- # anon=0 00:30:34.197 12:52:53 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:30:34.197 12:52:53 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:30:34.197 12:52:53 -- setup/common.sh@18 -- # local node= 00:30:34.197 12:52:53 -- setup/common.sh@19 -- # local var val 00:30:34.197 12:52:53 -- setup/common.sh@20 -- # local mem_f mem 00:30:34.197 12:52:53 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:30:34.197 12:52:53 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:30:34.197 12:52:53 -- setup/common.sh@25 -- # [[ -n '' ]] 00:30:34.197 12:52:53 -- setup/common.sh@28 -- # mapfile -t mem 00:30:34.197 12:52:53 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:30:34.197 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.197 12:52:53 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6842004 kB' 'MemAvailable: 9454168 kB' 'Buffers: 2436 kB' 'Cached: 2814748 kB' 'SwapCached: 0 kB' 'Active: 486808 kB' 'Inactive: 2444476 kB' 'Active(anon): 124568 kB' 'Inactive(anon): 0 kB' 'Active(file): 362240 kB' 'Inactive(file): 2444476 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 115632 kB' 'Mapped: 47952 kB' 'Shmem: 10468 kB' 'KReclaimable: 84772 kB' 'Slab: 164600 kB' 'SReclaimable: 84772 kB' 'SUnreclaim: 79828 kB' 'KernelStack: 6464 kB' 'PageTables: 3832 kB' 
'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 334232 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:30:34.197 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.197 12:52:53 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:34.197 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.197 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.197 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.197 12:52:53 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:34.197 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.197 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.197 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.197 12:52:53 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:34.197 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.197 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.197 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.197 12:52:53 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:34.197 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.197 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.197 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.197 12:52:53 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:34.197 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.197 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.197 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.197 12:52:53 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:34.197 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.197 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.197 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.197 12:52:53 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:34.197 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.197 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.197 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.197 12:52:53 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:34.197 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.197 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.197 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.197 12:52:53 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:34.197 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.197 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.197 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.197 12:52:53 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:34.197 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.197 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.197 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.197 12:52:53 -- setup/common.sh@32 -- # [[ 
Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:34.197 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.197 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.197 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.197 12:52:53 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:34.197 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.197 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.197 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.197 12:52:53 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:34.197 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.197 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.197 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.197 12:52:53 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:34.197 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.197 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.197 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.197 12:52:53 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:34.197 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.197 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.197 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.197 12:52:53 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:34.197 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.197 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.197 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.197 12:52:53 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:34.197 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.197 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.197 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.197 12:52:53 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:34.197 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.197 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.197 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.197 12:52:53 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:34.197 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.197 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.197 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.197 12:52:53 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:34.197 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.197 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.197 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.197 12:52:53 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:34.197 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.197 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.197 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.197 12:52:53 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:34.197 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.197 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.197 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.197 12:52:53 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:34.197 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.197 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.197 12:52:53 -- 
setup/common.sh@31 -- # read -r var val _ 00:30:34.197 12:52:53 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:34.197 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.197 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.197 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.197 12:52:53 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:34.197 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.197 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.197 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.197 12:52:53 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:34.197 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.197 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.197 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.197 12:52:53 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:34.197 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.197 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.197 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.197 12:52:53 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:34.197 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.197 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.197 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.198 12:52:53 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:34.198 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.198 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.198 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.198 12:52:53 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:34.198 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.198 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.198 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.198 12:52:53 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:34.198 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.198 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.198 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.198 12:52:53 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:34.198 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.198 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.198 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.198 12:52:53 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:34.198 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.198 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.198 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.198 12:52:53 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:34.198 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.198 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.198 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.198 12:52:53 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:34.198 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.198 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.198 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.198 12:52:53 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:34.198 12:52:53 -- 
setup/common.sh@32 -- # continue 00:30:34.198 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.198 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.198 12:52:53 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:34.198 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.198 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.198 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.198 12:52:53 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:34.198 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.198 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.198 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.198 12:52:53 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:34.198 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.198 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.198 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.198 12:52:53 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:34.198 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.198 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.198 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.198 12:52:53 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:34.198 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.198 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.198 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.198 12:52:53 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:34.198 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.198 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.198 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.198 12:52:53 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:34.198 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.198 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.198 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.198 12:52:53 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:34.198 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.198 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.198 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.198 12:52:53 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:34.198 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.198 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.198 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.198 12:52:53 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:34.198 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.198 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.198 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.198 12:52:53 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:34.198 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.198 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.198 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.198 12:52:53 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:34.198 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.198 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.198 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 
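What the repeated IFS=': ' / read -r var val _ / continue entries are exercising is a key lookup over /proc/meminfo: each line is split on the colon-space delimiter, the key field is compared against the requested name (AnonHugePages in the earlier pass, HugePages_Surp in this one), and the matching value is echoed back to the caller. The sketch below is a minimal stand-alone version of that pattern; lookup_meminfo is a hypothetical name, not the actual setup/common.sh get_meminfo helper, which additionally caches the file into an array with mapfile and supports per-node lookups, as the surrounding trace shows.

#!/usr/bin/env bash
# Minimal sketch of the /proc/meminfo key scan driven by the trace above.
# lookup_meminfo is a hypothetical stand-in for setup/common.sh's get_meminfo.
lookup_meminfo() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"        # numeric field only; a trailing "kB" lands in $_
            return 0
        fi
    done </proc/meminfo
    return 1
}

# In the log above, both of these resolve to 0 on the test VM.
lookup_meminfo AnonHugePages
lookup_meminfo HugePages_Surp
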
00:30:34.198 12:52:53 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:34.198 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.198 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.198 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.198 12:52:53 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:34.198 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.198 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.198 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.198 12:52:53 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:34.198 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.198 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.198 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.198 12:52:53 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:34.198 12:52:53 -- setup/common.sh@33 -- # echo 0 00:30:34.198 12:52:53 -- setup/common.sh@33 -- # return 0 00:30:34.198 12:52:53 -- setup/hugepages.sh@99 -- # surp=0 00:30:34.198 12:52:53 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:30:34.198 12:52:53 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:30:34.198 12:52:53 -- setup/common.sh@18 -- # local node= 00:30:34.198 12:52:53 -- setup/common.sh@19 -- # local var val 00:30:34.198 12:52:53 -- setup/common.sh@20 -- # local mem_f mem 00:30:34.198 12:52:53 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:30:34.198 12:52:53 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:30:34.198 12:52:53 -- setup/common.sh@25 -- # [[ -n '' ]] 00:30:34.198 12:52:53 -- setup/common.sh@28 -- # mapfile -t mem 00:30:34.198 12:52:53 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:30:34.198 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.198 12:52:53 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6842004 kB' 'MemAvailable: 9454168 kB' 'Buffers: 2436 kB' 'Cached: 2814748 kB' 'SwapCached: 0 kB' 'Active: 486868 kB' 'Inactive: 2444476 kB' 'Active(anon): 124628 kB' 'Inactive(anon): 0 kB' 'Active(file): 362240 kB' 'Inactive(file): 2444476 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 115744 kB' 'Mapped: 47952 kB' 'Shmem: 10468 kB' 'KReclaimable: 84772 kB' 'Slab: 164552 kB' 'SReclaimable: 84772 kB' 'SUnreclaim: 79780 kB' 'KernelStack: 6480 kB' 'PageTables: 3828 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 334232 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54820 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:30:34.198 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.198 12:52:53 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:34.198 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.198 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.198 12:52:53 -- setup/common.sh@31 -- # read -r 
var val _ 00:30:34.198 12:52:53 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:34.198 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.198 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.198 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.198 12:52:53 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:34.198 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.198 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.198 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.198 12:52:53 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:34.198 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.198 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.198 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.198 12:52:53 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:34.198 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.198 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.198 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.198 12:52:53 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:34.198 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.198 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.198 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.198 12:52:53 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:34.198 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.198 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.198 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.198 12:52:53 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:34.198 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.199 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.199 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.199 12:52:53 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:34.199 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.199 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.199 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.199 12:52:53 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:34.199 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.199 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.199 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.199 12:52:53 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:34.199 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.199 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.199 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.199 12:52:53 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:34.199 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.199 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.199 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.199 12:52:53 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:34.199 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.199 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.199 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.199 12:52:53 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:34.199 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.199 
12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.199 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.199 12:52:53 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:34.199 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.199 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.199 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.199 12:52:53 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:34.199 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.199 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.199 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.199 12:52:53 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:34.199 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.199 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.199 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.199 12:52:53 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:34.199 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.199 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.199 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.199 12:52:53 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:34.199 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.199 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.199 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.199 12:52:53 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:34.199 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.199 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.199 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.199 12:52:53 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:34.199 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.199 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.199 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.199 12:52:53 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:34.199 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.199 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.199 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.199 12:52:53 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:34.199 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.199 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.199 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.199 12:52:53 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:34.199 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.199 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.199 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.199 12:52:53 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:34.199 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.199 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.199 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.199 12:52:53 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:34.199 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.199 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.199 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.199 12:52:53 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:30:34.199 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.199 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.199 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.199 12:52:53 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:34.199 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.199 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.199 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.199 12:52:53 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:34.199 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.199 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.199 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.199 12:52:53 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:34.199 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.199 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.199 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.199 12:52:53 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:34.199 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.199 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.199 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.199 12:52:53 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:34.199 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.199 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.199 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.199 12:52:53 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:34.199 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.199 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.199 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.199 12:52:53 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:34.199 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.199 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.199 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.199 12:52:53 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:34.199 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.199 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.199 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.199 12:52:53 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:34.199 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.199 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.199 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.199 12:52:53 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:34.199 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.199 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.199 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.199 12:52:53 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:34.199 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.199 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.199 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.199 12:52:53 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:34.199 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.199 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.199 12:52:53 -- setup/common.sh@31 -- # read -r 
var val _ 00:30:34.199 12:52:53 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:34.199 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.199 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.199 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.199 12:52:53 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:34.199 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.199 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.199 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.199 12:52:53 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:34.199 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.200 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.200 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.200 12:52:53 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:34.200 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.200 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.200 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.200 12:52:53 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:34.200 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.200 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.200 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.200 12:52:53 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:34.200 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.200 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.200 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.200 12:52:53 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:34.200 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.200 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.200 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.200 12:52:53 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:34.200 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.200 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.200 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.200 12:52:53 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:34.200 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.200 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.200 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.200 12:52:53 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:34.200 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.200 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.200 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.200 12:52:53 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:34.200 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.200 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.200 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.200 12:52:53 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:34.200 12:52:53 -- setup/common.sh@33 -- # echo 0 00:30:34.200 12:52:53 -- setup/common.sh@33 -- # return 0 00:30:34.200 nr_hugepages=1024 00:30:34.200 resv_hugepages=0 00:30:34.200 surplus_hugepages=0 00:30:34.200 12:52:53 -- setup/hugepages.sh@100 -- # resv=0 00:30:34.200 12:52:53 -- setup/hugepages.sh@102 -- # 
echo nr_hugepages=1024 00:30:34.200 12:52:53 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:30:34.200 12:52:53 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:30:34.200 anon_hugepages=0 00:30:34.200 12:52:53 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:30:34.200 12:52:53 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:30:34.200 12:52:53 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:30:34.200 12:52:53 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:30:34.200 12:52:53 -- setup/common.sh@17 -- # local get=HugePages_Total 00:30:34.200 12:52:53 -- setup/common.sh@18 -- # local node= 00:30:34.200 12:52:53 -- setup/common.sh@19 -- # local var val 00:30:34.200 12:52:53 -- setup/common.sh@20 -- # local mem_f mem 00:30:34.200 12:52:53 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:30:34.200 12:52:53 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:30:34.200 12:52:53 -- setup/common.sh@25 -- # [[ -n '' ]] 00:30:34.200 12:52:53 -- setup/common.sh@28 -- # mapfile -t mem 00:30:34.200 12:52:53 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:30:34.200 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.200 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.200 12:52:53 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6841752 kB' 'MemAvailable: 9453916 kB' 'Buffers: 2436 kB' 'Cached: 2814748 kB' 'SwapCached: 0 kB' 'Active: 486696 kB' 'Inactive: 2444476 kB' 'Active(anon): 124456 kB' 'Inactive(anon): 0 kB' 'Active(file): 362240 kB' 'Inactive(file): 2444476 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 115616 kB' 'Mapped: 47952 kB' 'Shmem: 10468 kB' 'KReclaimable: 84772 kB' 'Slab: 164536 kB' 'SReclaimable: 84772 kB' 'SUnreclaim: 79764 kB' 'KernelStack: 6496 kB' 'PageTables: 3876 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 333992 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54804 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:30:34.200 12:52:53 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:34.200 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.200 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.200 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.200 12:52:53 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:34.200 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.200 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.200 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.200 12:52:53 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:34.200 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.200 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.200 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.200 12:52:53 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
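Once the individual lookups finish, the hugepages helper turns them into a consistency check: with anon, surp, and resv all 0 and nr_hugepages echoed as 1024, the trace evaluates (( 1024 == nr_hugepages + surp + resv )) at hugepages.sh@107, and the HugePages_Total scan continuing below feeds the same comparison at line 110. A self-contained sketch of that accounting check, assuming the same /proc/meminfo fields and taking nr_hugepages from the kernel rather than from the test's requested value (the helper name is hypothetical):

#!/usr/bin/env bash
# Hypothetical re-creation of the hugepage accounting check seen in the trace:
# HugePages_Total should equal the configured page count plus surplus and reserved.
set -euo pipefail

meminfo_field() {   # numeric value of one /proc/meminfo key
    awk -F': *' -v k="$1" '$1 == k { print $2 + 0 }' /proc/meminfo
}

nr_hugepages=$(cat /proc/sys/vm/nr_hugepages)
surp=$(meminfo_field HugePages_Surp)
resv=$(meminfo_field HugePages_Rsvd)
total=$(meminfo_field HugePages_Total)

if (( total == nr_hugepages + surp + resv )); then
    echo "hugepage accounting consistent: $total pages"
else
    echo "mismatch: total=$total nr=$nr_hugepages surp=$surp resv=$resv"
fi
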
00:30:34.200 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.200 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.200 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.200 12:52:53 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:34.200 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.200 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.200 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.200 12:52:53 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:34.200 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.200 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.200 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.200 12:52:53 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:34.200 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.200 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.200 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.200 12:52:53 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:34.200 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.200 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.200 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.200 12:52:53 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:34.200 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.200 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.200 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.200 12:52:53 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:34.201 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.201 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.201 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.201 12:52:53 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:34.201 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.201 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.201 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.201 12:52:53 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:34.201 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.201 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.201 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.201 12:52:53 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:34.201 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.201 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.201 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.201 12:52:53 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:34.201 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.201 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.201 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.201 12:52:53 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:34.201 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.201 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.201 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.201 12:52:53 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:34.201 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.201 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.201 12:52:53 -- setup/common.sh@31 
-- # read -r var val _ 00:30:34.201 12:52:53 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:34.201 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.201 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.201 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.201 12:52:53 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:34.201 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.201 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.201 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.201 12:52:53 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:34.201 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.201 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.201 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.201 12:52:53 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:34.201 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.201 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.201 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.201 12:52:53 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:34.201 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.201 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.201 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.201 12:52:53 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:34.201 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.201 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.201 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.201 12:52:53 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:34.201 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.201 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.201 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.201 12:52:53 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:34.201 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.201 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.201 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.201 12:52:53 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:34.201 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.201 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.201 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.201 12:52:53 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:34.201 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.201 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.201 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.201 12:52:53 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:34.201 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.201 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.201 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.201 12:52:53 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:34.201 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.201 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.201 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.201 12:52:53 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:34.201 12:52:53 -- setup/common.sh@32 -- # continue 
00:30:34.201 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.201 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.201 12:52:53 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:34.201 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.201 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.201 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.201 12:52:53 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:34.201 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.201 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.201 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.201 12:52:53 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:34.201 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.201 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.201 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.201 12:52:53 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:34.201 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.201 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.201 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.201 12:52:53 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:34.201 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.201 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.201 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.201 12:52:53 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:34.201 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.201 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.201 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.201 12:52:53 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:34.201 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.201 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.201 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.201 12:52:53 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:34.201 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.201 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.201 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.201 12:52:53 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:34.201 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.201 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.201 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.201 12:52:53 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:34.201 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.201 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.201 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.201 12:52:53 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:34.201 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.201 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.201 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.201 12:52:53 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:34.201 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.201 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.201 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.201 
12:52:53 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:34.201 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.201 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.201 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.201 12:52:53 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:34.201 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.201 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.201 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.201 12:52:53 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:34.201 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.201 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.201 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.201 12:52:53 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:34.201 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.201 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.201 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.201 12:52:53 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:34.202 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.202 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.202 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.202 12:52:53 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:34.202 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.202 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.202 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.202 12:52:53 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:34.202 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.202 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.202 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.202 12:52:53 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:34.202 12:52:53 -- setup/common.sh@33 -- # echo 1024 00:30:34.202 12:52:53 -- setup/common.sh@33 -- # return 0 00:30:34.202 12:52:53 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:30:34.202 12:52:53 -- setup/hugepages.sh@112 -- # get_nodes 00:30:34.202 12:52:53 -- setup/hugepages.sh@27 -- # local node 00:30:34.202 12:52:53 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:30:34.202 12:52:53 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:30:34.202 12:52:53 -- setup/hugepages.sh@32 -- # no_nodes=1 00:30:34.202 12:52:53 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:30:34.202 12:52:53 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:30:34.202 12:52:53 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:30:34.202 12:52:53 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:30:34.202 12:52:53 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:30:34.202 12:52:53 -- setup/common.sh@18 -- # local node=0 00:30:34.202 12:52:53 -- setup/common.sh@19 -- # local var val 00:30:34.202 12:52:53 -- setup/common.sh@20 -- # local mem_f mem 00:30:34.202 12:52:53 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:30:34.202 12:52:53 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:30:34.202 12:52:53 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:30:34.202 12:52:53 -- 
setup/common.sh@28 -- # mapfile -t mem 00:30:34.202 12:52:53 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:30:34.202 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.202 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.202 12:52:53 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6843512 kB' 'MemUsed: 5398468 kB' 'SwapCached: 0 kB' 'Active: 486544 kB' 'Inactive: 2444476 kB' 'Active(anon): 124304 kB' 'Inactive(anon): 0 kB' 'Active(file): 362240 kB' 'Inactive(file): 2444476 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 2817184 kB' 'Mapped: 47952 kB' 'AnonPages: 115496 kB' 'Shmem: 10468 kB' 'KernelStack: 6480 kB' 'PageTables: 3832 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 84772 kB' 'Slab: 164536 kB' 'SReclaimable: 84772 kB' 'SUnreclaim: 79764 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:30:34.202 12:52:53 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:34.202 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.202 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.202 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.202 12:52:53 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:34.202 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.202 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.202 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.202 12:52:53 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:34.202 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.202 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.202 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.202 12:52:53 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:34.202 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.202 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.202 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.202 12:52:53 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:34.202 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.202 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.202 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.202 12:52:53 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:34.202 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.202 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.202 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.202 12:52:53 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:34.202 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.202 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.202 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.202 12:52:53 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:34.202 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.202 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.202 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.202 12:52:53 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:34.202 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.202 12:52:53 -- setup/common.sh@31 
-- # IFS=': ' 00:30:34.202 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.202 12:52:53 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:34.202 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.202 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.202 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.202 12:52:53 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:34.202 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.202 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.202 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.202 12:52:53 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:34.202 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.202 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.202 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.202 12:52:53 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:34.202 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.202 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.202 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.202 12:52:53 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:34.202 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.202 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.202 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.202 12:52:53 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:34.202 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.202 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.202 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.202 12:52:53 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:34.202 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.202 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.202 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.202 12:52:53 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:34.202 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.202 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.202 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.202 12:52:53 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:34.202 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.202 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.202 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.202 12:52:53 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:34.202 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.202 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.202 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.202 12:52:53 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:34.202 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.202 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.202 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.202 12:52:53 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:34.202 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.202 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.202 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.202 12:52:53 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:34.202 
12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.202 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.202 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.202 12:52:53 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:34.202 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.202 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.202 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.202 12:52:53 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:34.202 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.202 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.202 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.202 12:52:53 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:34.202 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.202 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.202 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.202 12:52:53 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:34.202 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.202 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.202 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.202 12:52:53 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:34.202 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.202 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.202 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.202 12:52:53 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:34.202 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.202 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.203 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.203 12:52:53 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:34.203 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.203 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.203 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.203 12:52:53 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:34.203 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.203 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.203 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.203 12:52:53 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:34.203 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.203 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.203 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.203 12:52:53 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:34.203 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.203 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.203 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.203 12:52:53 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:34.203 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.203 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.203 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.203 12:52:53 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:34.203 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.203 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.203 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 
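The repetitive xtrace running through here is setup/common.sh's get_meminfo helper scanning every field of node0's meminfo until it reaches the one requested (HugePages_Surp at this point), which is how hugepages.sh confirms node0 really holds the 1024 pages it expects. A minimal sketch of that helper, reconstructed from the traced commands — the names and flow follow the trace, but treat it as an illustration rather than the verbatim SPDK script:

    shopt -s extglob                       # needed for the "Node +([0-9]) " strip below

    get_meminfo() {
        local get=$1 node=$2 var val _
        local mem_f=/proc/meminfo mem
        # Use the per-node meminfo file when a node was requested and it exists
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # drop the leading "Node N " prefix
        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }

    get_meminfo HugePages_Surp 0           # prints 0 in the run traced here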
00:30:34.203 12:52:53 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:34.203 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.203 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.203 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.203 12:52:53 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:34.203 12:52:53 -- setup/common.sh@32 -- # continue 00:30:34.203 12:52:53 -- setup/common.sh@31 -- # IFS=': ' 00:30:34.203 12:52:53 -- setup/common.sh@31 -- # read -r var val _ 00:30:34.203 12:52:53 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:34.203 12:52:53 -- setup/common.sh@33 -- # echo 0 00:30:34.203 12:52:53 -- setup/common.sh@33 -- # return 0 00:30:34.203 12:52:53 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:30:34.203 12:52:53 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:30:34.203 12:52:53 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:30:34.203 12:52:53 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:30:34.203 node0=1024 expecting 1024 00:30:34.203 12:52:53 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:30:34.203 12:52:53 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:30:34.203 00:30:34.203 real 0m1.060s 00:30:34.203 user 0m0.561s 00:30:34.203 sys 0m0.538s 00:30:34.203 12:52:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:34.203 12:52:53 -- common/autotest_common.sh@10 -- # set +x 00:30:34.203 ************************************ 00:30:34.203 END TEST no_shrink_alloc 00:30:34.203 ************************************ 00:30:34.203 12:52:53 -- setup/hugepages.sh@217 -- # clear_hp 00:30:34.203 12:52:53 -- setup/hugepages.sh@37 -- # local node hp 00:30:34.203 12:52:53 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:30:34.203 12:52:53 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:30:34.203 12:52:53 -- setup/hugepages.sh@41 -- # echo 0 00:30:34.203 12:52:53 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:30:34.203 12:52:53 -- setup/hugepages.sh@41 -- # echo 0 00:30:34.203 12:52:53 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:30:34.203 12:52:53 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:30:34.203 00:30:34.203 real 0m4.784s 00:30:34.203 user 0m2.325s 00:30:34.203 sys 0m2.411s 00:30:34.203 12:52:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:34.203 12:52:53 -- common/autotest_common.sh@10 -- # set +x 00:30:34.203 ************************************ 00:30:34.203 END TEST hugepages 00:30:34.203 ************************************ 00:30:34.203 12:52:53 -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:30:34.203 12:52:53 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:30:34.203 12:52:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:34.203 12:52:53 -- common/autotest_common.sh@10 -- # set +x 00:30:34.462 ************************************ 00:30:34.462 START TEST driver 00:30:34.462 ************************************ 00:30:34.462 12:52:53 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:30:34.462 * Looking for test storage... 
00:30:34.462 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:30:34.462 12:52:53 -- setup/driver.sh@68 -- # setup reset 00:30:34.462 12:52:53 -- setup/common.sh@9 -- # [[ reset == output ]] 00:30:34.462 12:52:53 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:30:35.030 12:52:54 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:30:35.030 12:52:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:30:35.030 12:52:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:35.030 12:52:54 -- common/autotest_common.sh@10 -- # set +x 00:30:35.030 ************************************ 00:30:35.030 START TEST guess_driver 00:30:35.030 ************************************ 00:30:35.030 12:52:54 -- common/autotest_common.sh@1104 -- # guess_driver 00:30:35.030 12:52:54 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:30:35.030 12:52:54 -- setup/driver.sh@47 -- # local fail=0 00:30:35.030 12:52:54 -- setup/driver.sh@49 -- # pick_driver 00:30:35.030 12:52:54 -- setup/driver.sh@36 -- # vfio 00:30:35.030 12:52:54 -- setup/driver.sh@21 -- # local iommu_grups 00:30:35.030 12:52:54 -- setup/driver.sh@22 -- # local unsafe_vfio 00:30:35.030 12:52:54 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:30:35.030 12:52:54 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:30:35.030 12:52:54 -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:30:35.030 12:52:54 -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:30:35.030 12:52:54 -- setup/driver.sh@32 -- # return 1 00:30:35.030 12:52:54 -- setup/driver.sh@38 -- # uio 00:30:35.030 12:52:54 -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:30:35.030 12:52:54 -- setup/driver.sh@14 -- # mod uio_pci_generic 00:30:35.030 12:52:54 -- setup/driver.sh@12 -- # dep uio_pci_generic 00:30:35.030 12:52:54 -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:30:35.030 12:52:54 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:30:35.030 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:30:35.030 12:52:54 -- setup/driver.sh@39 -- # echo uio_pci_generic 00:30:35.030 12:52:54 -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:30:35.030 Looking for driver=uio_pci_generic 00:30:35.030 12:52:54 -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:30:35.030 12:52:54 -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:30:35.030 12:52:54 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:30:35.030 12:52:54 -- setup/driver.sh@45 -- # setup output config 00:30:35.030 12:52:54 -- setup/common.sh@9 -- # [[ output == output ]] 00:30:35.030 12:52:54 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:30:35.599 12:52:54 -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:30:35.599 12:52:54 -- setup/driver.sh@58 -- # continue 00:30:35.599 12:52:54 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:30:35.599 12:52:54 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:30:35.599 12:52:54 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:30:35.599 12:52:54 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:30:35.599 12:52:54 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:30:35.599 12:52:54 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:30:35.599 12:52:54 -- 
setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:30:35.858 12:52:55 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:30:35.858 12:52:55 -- setup/driver.sh@65 -- # setup reset 00:30:35.858 12:52:55 -- setup/common.sh@9 -- # [[ reset == output ]] 00:30:35.858 12:52:55 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:30:36.428 00:30:36.428 real 0m1.381s 00:30:36.428 user 0m0.530s 00:30:36.428 sys 0m0.852s 00:30:36.428 12:52:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:36.428 12:52:55 -- common/autotest_common.sh@10 -- # set +x 00:30:36.428 ************************************ 00:30:36.428 END TEST guess_driver 00:30:36.428 ************************************ 00:30:36.428 00:30:36.428 real 0m2.048s 00:30:36.428 user 0m0.751s 00:30:36.428 sys 0m1.345s 00:30:36.428 12:52:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:36.428 12:52:55 -- common/autotest_common.sh@10 -- # set +x 00:30:36.428 ************************************ 00:30:36.428 END TEST driver 00:30:36.428 ************************************ 00:30:36.428 12:52:55 -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:30:36.428 12:52:55 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:30:36.428 12:52:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:36.428 12:52:55 -- common/autotest_common.sh@10 -- # set +x 00:30:36.428 ************************************ 00:30:36.428 START TEST devices 00:30:36.428 ************************************ 00:30:36.428 12:52:55 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:30:36.428 * Looking for test storage... 00:30:36.428 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:30:36.428 12:52:55 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:30:36.428 12:52:55 -- setup/devices.sh@192 -- # setup reset 00:30:36.428 12:52:55 -- setup/common.sh@9 -- # [[ reset == output ]] 00:30:36.428 12:52:55 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:30:37.384 12:52:56 -- setup/devices.sh@194 -- # get_zoned_devs 00:30:37.384 12:52:56 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:30:37.384 12:52:56 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:30:37.384 12:52:56 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:30:37.384 12:52:56 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:30:37.384 12:52:56 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:30:37.384 12:52:56 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:30:37.384 12:52:56 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:30:37.384 12:52:56 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:30:37.384 12:52:56 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:30:37.384 12:52:56 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n1 00:30:37.384 12:52:56 -- common/autotest_common.sh@1647 -- # local device=nvme1n1 00:30:37.384 12:52:56 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:30:37.384 12:52:56 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:30:37.384 12:52:56 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:30:37.384 12:52:56 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n2 00:30:37.384 12:52:56 -- common/autotest_common.sh@1647 -- # local device=nvme1n2 00:30:37.384 12:52:56 -- 
common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:30:37.384 12:52:56 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:30:37.384 12:52:56 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:30:37.384 12:52:56 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n3 00:30:37.384 12:52:56 -- common/autotest_common.sh@1647 -- # local device=nvme1n3 00:30:37.384 12:52:56 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:30:37.384 12:52:56 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:30:37.384 12:52:56 -- setup/devices.sh@196 -- # blocks=() 00:30:37.384 12:52:56 -- setup/devices.sh@196 -- # declare -a blocks 00:30:37.384 12:52:56 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:30:37.384 12:52:56 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:30:37.384 12:52:56 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:30:37.384 12:52:56 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:30:37.384 12:52:56 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:30:37.384 12:52:56 -- setup/devices.sh@201 -- # ctrl=nvme0 00:30:37.384 12:52:56 -- setup/devices.sh@202 -- # pci=0000:00:06.0 00:30:37.384 12:52:56 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:30:37.384 12:52:56 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:30:37.384 12:52:56 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:30:37.384 12:52:56 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:30:37.384 No valid GPT data, bailing 00:30:37.384 12:52:56 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:30:37.384 12:52:56 -- scripts/common.sh@393 -- # pt= 00:30:37.384 12:52:56 -- scripts/common.sh@394 -- # return 1 00:30:37.384 12:52:56 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:30:37.384 12:52:56 -- setup/common.sh@76 -- # local dev=nvme0n1 00:30:37.384 12:52:56 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:30:37.384 12:52:56 -- setup/common.sh@80 -- # echo 5368709120 00:30:37.384 12:52:56 -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:30:37.384 12:52:56 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:30:37.384 12:52:56 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:06.0 00:30:37.384 12:52:56 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:30:37.384 12:52:56 -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:30:37.384 12:52:56 -- setup/devices.sh@201 -- # ctrl=nvme1 00:30:37.384 12:52:56 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:30:37.384 12:52:56 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:30:37.384 12:52:56 -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:30:37.384 12:52:56 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:30:37.384 12:52:56 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:30:37.384 No valid GPT data, bailing 00:30:37.384 12:52:56 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:30:37.384 12:52:56 -- scripts/common.sh@393 -- # pt= 00:30:37.384 12:52:56 -- scripts/common.sh@394 -- # return 1 00:30:37.384 12:52:56 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:30:37.384 12:52:56 -- setup/common.sh@76 -- # local dev=nvme1n1 00:30:37.384 12:52:56 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:30:37.384 12:52:56 -- setup/common.sh@80 -- # echo 4294967296 00:30:37.384 12:52:56 -- 
setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:30:37.384 12:52:56 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:30:37.384 12:52:56 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:30:37.384 12:52:56 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:30:37.384 12:52:56 -- setup/devices.sh@201 -- # ctrl=nvme1n2 00:30:37.384 12:52:56 -- setup/devices.sh@201 -- # ctrl=nvme1 00:30:37.385 12:52:56 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:30:37.385 12:52:56 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:30:37.385 12:52:56 -- setup/devices.sh@204 -- # block_in_use nvme1n2 00:30:37.385 12:52:56 -- scripts/common.sh@380 -- # local block=nvme1n2 pt 00:30:37.385 12:52:56 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2 00:30:37.385 No valid GPT data, bailing 00:30:37.385 12:52:56 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:30:37.385 12:52:56 -- scripts/common.sh@393 -- # pt= 00:30:37.385 12:52:56 -- scripts/common.sh@394 -- # return 1 00:30:37.385 12:52:56 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n2 00:30:37.385 12:52:56 -- setup/common.sh@76 -- # local dev=nvme1n2 00:30:37.385 12:52:56 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n2 ]] 00:30:37.385 12:52:56 -- setup/common.sh@80 -- # echo 4294967296 00:30:37.385 12:52:56 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:30:37.385 12:52:56 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:30:37.385 12:52:56 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:30:37.385 12:52:56 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:30:37.385 12:52:56 -- setup/devices.sh@201 -- # ctrl=nvme1n3 00:30:37.385 12:52:56 -- setup/devices.sh@201 -- # ctrl=nvme1 00:30:37.385 12:52:56 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:30:37.385 12:52:56 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:30:37.385 12:52:56 -- setup/devices.sh@204 -- # block_in_use nvme1n3 00:30:37.385 12:52:56 -- scripts/common.sh@380 -- # local block=nvme1n3 pt 00:30:37.385 12:52:56 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3 00:30:37.385 No valid GPT data, bailing 00:30:37.385 12:52:56 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:30:37.385 12:52:56 -- scripts/common.sh@393 -- # pt= 00:30:37.385 12:52:56 -- scripts/common.sh@394 -- # return 1 00:30:37.385 12:52:56 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n3 00:30:37.385 12:52:56 -- setup/common.sh@76 -- # local dev=nvme1n3 00:30:37.385 12:52:56 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n3 ]] 00:30:37.385 12:52:56 -- setup/common.sh@80 -- # echo 4294967296 00:30:37.385 12:52:56 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:30:37.385 12:52:56 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:30:37.668 12:52:56 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:30:37.668 12:52:56 -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:30:37.668 12:52:56 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:30:37.668 12:52:56 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:30:37.668 12:52:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:30:37.668 12:52:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:37.668 12:52:56 -- common/autotest_common.sh@10 -- # set +x 00:30:37.668 
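Right before TEST nvme_mount starts, the trace above shows devices.sh filtering the candidate disks: it walks /sys/block/nvme* (skipping nvme*c* path nodes and zoned namespaces), probes each disk with block_in_use from scripts/common.sh (spdk-gpt.py plus blkid), and keeps only disks of at least min_disk_size = 3221225472 bytes, recording each disk's PCI address. A rough sketch of that filter; the constants and helper names come from the trace, while the PCI lookup, the 512-byte sector assumption, and the return convention of block_in_use are inferred rather than visible:

    rootdir=/home/vagrant/spdk_repo/spdk   # path as it appears in this log
    shopt -s extglob
    min_disk_size=3221225472               # 3 GiB, same constant as in the trace

    block_in_use() {                       # assumed: returns 0 when a partition table is present
        local block=$1 pt
        "$rootdir/scripts/spdk-gpt.py" "$block" || true
        pt=$(blkid -s PTTYPE -o value "/dev/$block")
        [[ -n $pt ]]
    }

    blocks=(); declare -A blocks_to_pci
    for block in /sys/block/nvme!(*c*); do
        dev=${block##*/}
        # skip zoned namespaces, disks already in use, and disks that are too small
        [[ $(cat "$block/queue/zoned" 2>/dev/null || echo none) == none ]] || continue
        block_in_use "$dev" && continue
        size=$(( $(cat "$block/size") * 512 ))        # sec_size_to_bytes, assuming 512-byte sectors
        (( size >= min_disk_size )) || continue
        blocks+=("$dev")
        blocks_to_pci["$dev"]=$(basename "$(readlink -f "$block/device/device")")   # guessed PCI lookup
    done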
************************************ 00:30:37.668 START TEST nvme_mount 00:30:37.668 ************************************ 00:30:37.668 12:52:56 -- common/autotest_common.sh@1104 -- # nvme_mount 00:30:37.668 12:52:56 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:30:37.668 12:52:56 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:30:37.668 12:52:56 -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:30:37.668 12:52:56 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:30:37.668 12:52:56 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:30:37.668 12:52:56 -- setup/common.sh@39 -- # local disk=nvme0n1 00:30:37.668 12:52:56 -- setup/common.sh@40 -- # local part_no=1 00:30:37.668 12:52:56 -- setup/common.sh@41 -- # local size=1073741824 00:30:37.668 12:52:56 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:30:37.668 12:52:56 -- setup/common.sh@44 -- # parts=() 00:30:37.668 12:52:56 -- setup/common.sh@44 -- # local parts 00:30:37.668 12:52:56 -- setup/common.sh@46 -- # (( part = 1 )) 00:30:37.668 12:52:56 -- setup/common.sh@46 -- # (( part <= part_no )) 00:30:37.668 12:52:56 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:30:37.668 12:52:56 -- setup/common.sh@46 -- # (( part++ )) 00:30:37.668 12:52:56 -- setup/common.sh@46 -- # (( part <= part_no )) 00:30:37.668 12:52:56 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:30:37.668 12:52:56 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:30:37.668 12:52:56 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:30:38.639 Creating new GPT entries in memory. 00:30:38.639 GPT data structures destroyed! You may now partition the disk using fdisk or 00:30:38.639 other utilities. 00:30:38.639 12:52:57 -- setup/common.sh@57 -- # (( part = 1 )) 00:30:38.639 12:52:57 -- setup/common.sh@57 -- # (( part <= part_no )) 00:30:38.639 12:52:57 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:30:38.639 12:52:57 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:30:38.639 12:52:57 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:30:39.574 Creating new GPT entries in memory. 00:30:39.574 The operation has completed successfully. 
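The "Creating new GPT entries in memory." and "The operation has completed successfully." messages above are sgdisk output from setup/common.sh's partition_drive: it wipes the label with sgdisk --zap-all, lets scripts/sync_dev_uevents.sh wait for the new partition uevents, and creates each partition with sgdisk --new while holding flock on the disk node. An approximation of that flow — the 2048-sector start and the 1073741824/4096 sizing step are taken from the trace, the rest is reconstructed rather than the verbatim script:

    rootdir=/home/vagrant/spdk_repo/spdk            # path as it appears in this log

    partition_drive() {
        local disk=$1 part_no=${2:-1} size=1073741824
        local part part_start=0 part_end=0 parts=()

        for ((part = 1; part <= part_no; part++)); do
            parts+=("${disk}p$part")
        done
        ((size /= 4096))                            # same sizing step as in the trace

        # Watch for the partition uevents in the background, then wipe and repartition
        "$rootdir/scripts/sync_dev_uevents.sh" block/partition "${parts[@]}" &
        sgdisk "/dev/$disk" --zap-all

        for ((part = 1; part <= part_no; part++)); do
            ((part_start = part_start == 0 ? 2048 : part_end + 1))
            ((part_end = part_start + size - 1))
            flock "/dev/$disk" sgdisk "/dev/$disk" --new="$part:$part_start:$part_end"
        done
        wait $!                                     # uevent watcher exits once all partitions appear
    }

    partition_drive nvme0n1 1                       # the nvme_mount case traced above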
00:30:39.574 12:52:58 -- setup/common.sh@57 -- # (( part++ )) 00:30:39.574 12:52:58 -- setup/common.sh@57 -- # (( part <= part_no )) 00:30:39.574 12:52:58 -- setup/common.sh@62 -- # wait 65635 00:30:39.574 12:52:58 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:30:39.574 12:52:58 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:30:39.574 12:52:58 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:30:39.574 12:52:58 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:30:39.574 12:52:58 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:30:39.574 12:52:58 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:30:39.574 12:52:58 -- setup/devices.sh@105 -- # verify 0000:00:06.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:30:39.574 12:52:58 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:30:39.574 12:52:58 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:30:39.574 12:52:58 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:30:39.574 12:52:58 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:30:39.574 12:52:58 -- setup/devices.sh@53 -- # local found=0 00:30:39.574 12:52:58 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:30:39.574 12:52:58 -- setup/devices.sh@56 -- # : 00:30:39.574 12:52:58 -- setup/devices.sh@59 -- # local pci status 00:30:39.574 12:52:58 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:30:39.574 12:52:58 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:30:39.574 12:52:58 -- setup/devices.sh@47 -- # setup output config 00:30:39.574 12:52:58 -- setup/common.sh@9 -- # [[ output == output ]] 00:30:39.574 12:52:58 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:30:39.833 12:52:59 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:30:39.833 12:52:59 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:30:39.833 12:52:59 -- setup/devices.sh@63 -- # found=1 00:30:39.833 12:52:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:30:39.833 12:52:59 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:30:39.833 12:52:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:30:40.092 12:52:59 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:30:40.092 12:52:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:30:40.092 12:52:59 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:30:40.092 12:52:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:30:40.350 12:52:59 -- setup/devices.sh@66 -- # (( found == 1 )) 00:30:40.350 12:52:59 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:30:40.350 12:52:59 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:30:40.350 12:52:59 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:30:40.350 12:52:59 -- setup/devices.sh@74 -- # rm 
/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:30:40.350 12:52:59 -- setup/devices.sh@110 -- # cleanup_nvme 00:30:40.350 12:52:59 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:30:40.350 12:52:59 -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:30:40.350 12:52:59 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:30:40.350 12:52:59 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:30:40.350 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:30:40.350 12:52:59 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:30:40.350 12:52:59 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:30:40.609 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:30:40.609 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:30:40.609 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:30:40.609 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:30:40.609 12:52:59 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:30:40.609 12:52:59 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:30:40.609 12:52:59 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:30:40.609 12:52:59 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:30:40.609 12:52:59 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:30:40.609 12:52:59 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:30:40.609 12:52:59 -- setup/devices.sh@116 -- # verify 0000:00:06.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:30:40.609 12:52:59 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:30:40.609 12:52:59 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:30:40.609 12:52:59 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:30:40.609 12:52:59 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:30:40.609 12:52:59 -- setup/devices.sh@53 -- # local found=0 00:30:40.609 12:52:59 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:30:40.609 12:52:59 -- setup/devices.sh@56 -- # : 00:30:40.609 12:52:59 -- setup/devices.sh@59 -- # local pci status 00:30:40.609 12:52:59 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:30:40.609 12:52:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:30:40.609 12:52:59 -- setup/devices.sh@47 -- # setup output config 00:30:40.609 12:52:59 -- setup/common.sh@9 -- # [[ output == output ]] 00:30:40.609 12:52:59 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:30:40.868 12:53:00 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:30:40.868 12:53:00 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:30:40.868 12:53:00 -- setup/devices.sh@63 -- # found=1 00:30:40.868 12:53:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:30:40.868 12:53:00 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:30:40.868 
12:53:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:30:41.130 12:53:00 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:30:41.130 12:53:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:30:41.130 12:53:00 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:30:41.130 12:53:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:30:41.130 12:53:00 -- setup/devices.sh@66 -- # (( found == 1 )) 00:30:41.130 12:53:00 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:30:41.130 12:53:00 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:30:41.130 12:53:00 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:30:41.130 12:53:00 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:30:41.130 12:53:00 -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:30:41.130 12:53:00 -- setup/devices.sh@125 -- # verify 0000:00:06.0 data@nvme0n1 '' '' 00:30:41.130 12:53:00 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:30:41.130 12:53:00 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:30:41.130 12:53:00 -- setup/devices.sh@50 -- # local mount_point= 00:30:41.130 12:53:00 -- setup/devices.sh@51 -- # local test_file= 00:30:41.130 12:53:00 -- setup/devices.sh@53 -- # local found=0 00:30:41.130 12:53:00 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:30:41.131 12:53:00 -- setup/devices.sh@59 -- # local pci status 00:30:41.131 12:53:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:30:41.131 12:53:00 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:30:41.131 12:53:00 -- setup/devices.sh@47 -- # setup output config 00:30:41.131 12:53:00 -- setup/common.sh@9 -- # [[ output == output ]] 00:30:41.131 12:53:00 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:30:41.393 12:53:00 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:30:41.393 12:53:00 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:30:41.393 12:53:00 -- setup/devices.sh@63 -- # found=1 00:30:41.393 12:53:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:30:41.393 12:53:00 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:30:41.393 12:53:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:30:41.963 12:53:01 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:30:41.963 12:53:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:30:41.963 12:53:01 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:30:41.963 12:53:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:30:41.963 12:53:01 -- setup/devices.sh@66 -- # (( found == 1 )) 00:30:41.963 12:53:01 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:30:41.963 12:53:01 -- setup/devices.sh@68 -- # return 0 00:30:41.963 12:53:01 -- setup/devices.sh@128 -- # cleanup_nvme 00:30:41.963 12:53:01 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:30:41.963 12:53:01 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:30:41.963 12:53:01 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:30:41.963 12:53:01 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:30:41.963 /dev/nvme0n1: 2 bytes were erased at offset 
0x00000438 (ext4): 53 ef 00:30:41.963 00:30:41.963 real 0m4.503s 00:30:41.963 user 0m0.994s 00:30:41.963 sys 0m1.218s 00:30:41.964 12:53:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:41.964 12:53:01 -- common/autotest_common.sh@10 -- # set +x 00:30:41.964 ************************************ 00:30:41.964 END TEST nvme_mount 00:30:41.964 ************************************ 00:30:41.964 12:53:01 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:30:41.964 12:53:01 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:30:41.964 12:53:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:41.964 12:53:01 -- common/autotest_common.sh@10 -- # set +x 00:30:41.964 ************************************ 00:30:41.964 START TEST dm_mount 00:30:41.964 ************************************ 00:30:41.964 12:53:01 -- common/autotest_common.sh@1104 -- # dm_mount 00:30:41.964 12:53:01 -- setup/devices.sh@144 -- # pv=nvme0n1 00:30:41.964 12:53:01 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:30:41.964 12:53:01 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:30:41.964 12:53:01 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:30:41.964 12:53:01 -- setup/common.sh@39 -- # local disk=nvme0n1 00:30:41.964 12:53:01 -- setup/common.sh@40 -- # local part_no=2 00:30:41.964 12:53:01 -- setup/common.sh@41 -- # local size=1073741824 00:30:41.964 12:53:01 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:30:41.964 12:53:01 -- setup/common.sh@44 -- # parts=() 00:30:41.964 12:53:01 -- setup/common.sh@44 -- # local parts 00:30:41.964 12:53:01 -- setup/common.sh@46 -- # (( part = 1 )) 00:30:41.964 12:53:01 -- setup/common.sh@46 -- # (( part <= part_no )) 00:30:41.964 12:53:01 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:30:41.964 12:53:01 -- setup/common.sh@46 -- # (( part++ )) 00:30:41.964 12:53:01 -- setup/common.sh@46 -- # (( part <= part_no )) 00:30:41.964 12:53:01 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:30:41.964 12:53:01 -- setup/common.sh@46 -- # (( part++ )) 00:30:41.964 12:53:01 -- setup/common.sh@46 -- # (( part <= part_no )) 00:30:41.964 12:53:01 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:30:41.964 12:53:01 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:30:41.964 12:53:01 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:30:43.340 Creating new GPT entries in memory. 00:30:43.340 GPT data structures destroyed! You may now partition the disk using fdisk or 00:30:43.340 other utilities. 00:30:43.340 12:53:02 -- setup/common.sh@57 -- # (( part = 1 )) 00:30:43.340 12:53:02 -- setup/common.sh@57 -- # (( part <= part_no )) 00:30:43.340 12:53:02 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:30:43.340 12:53:02 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:30:43.340 12:53:02 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:30:44.276 Creating new GPT entries in memory. 00:30:44.276 The operation has completed successfully. 00:30:44.276 12:53:03 -- setup/common.sh@57 -- # (( part++ )) 00:30:44.276 12:53:03 -- setup/common.sh@57 -- # (( part <= part_no )) 00:30:44.276 12:53:03 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:30:44.276 12:53:03 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:30:44.276 12:53:03 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:30:45.213 The operation has completed successfully. 00:30:45.213 12:53:04 -- setup/common.sh@57 -- # (( part++ )) 00:30:45.213 12:53:04 -- setup/common.sh@57 -- # (( part <= part_no )) 00:30:45.213 12:53:04 -- setup/common.sh@62 -- # wait 66095 00:30:45.213 12:53:04 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:30:45.213 12:53:04 -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:30:45.213 12:53:04 -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:30:45.213 12:53:04 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:30:45.213 12:53:04 -- setup/devices.sh@160 -- # for t in {1..5} 00:30:45.213 12:53:04 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:30:45.213 12:53:04 -- setup/devices.sh@161 -- # break 00:30:45.213 12:53:04 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:30:45.213 12:53:04 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:30:45.213 12:53:04 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:30:45.213 12:53:04 -- setup/devices.sh@166 -- # dm=dm-0 00:30:45.213 12:53:04 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:30:45.213 12:53:04 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:30:45.213 12:53:04 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:30:45.213 12:53:04 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:30:45.213 12:53:04 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:30:45.213 12:53:04 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:30:45.213 12:53:04 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:30:45.213 12:53:04 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:30:45.213 12:53:04 -- setup/devices.sh@174 -- # verify 0000:00:06.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:30:45.213 12:53:04 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:30:45.213 12:53:04 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:30:45.213 12:53:04 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:30:45.213 12:53:04 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:30:45.213 12:53:04 -- setup/devices.sh@53 -- # local found=0 00:30:45.213 12:53:04 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:30:45.213 12:53:04 -- setup/devices.sh@56 -- # : 00:30:45.213 12:53:04 -- setup/devices.sh@59 -- # local pci status 00:30:45.213 12:53:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:30:45.213 12:53:04 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:30:45.213 12:53:04 -- setup/devices.sh@47 -- # setup output config 00:30:45.213 12:53:04 -- setup/common.sh@9 -- # [[ output == output ]] 00:30:45.213 12:53:04 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:30:45.472 12:53:04 -- 
setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:30:45.472 12:53:04 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:30:45.472 12:53:04 -- setup/devices.sh@63 -- # found=1 00:30:45.472 12:53:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:30:45.472 12:53:04 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:30:45.472 12:53:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:30:45.732 12:53:05 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:30:45.732 12:53:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:30:45.732 12:53:05 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:30:45.732 12:53:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:30:45.991 12:53:05 -- setup/devices.sh@66 -- # (( found == 1 )) 00:30:45.991 12:53:05 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:30:45.991 12:53:05 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:30:45.991 12:53:05 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:30:45.991 12:53:05 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:30:45.991 12:53:05 -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:30:45.991 12:53:05 -- setup/devices.sh@184 -- # verify 0000:00:06.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:30:45.991 12:53:05 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:30:45.991 12:53:05 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:30:45.991 12:53:05 -- setup/devices.sh@50 -- # local mount_point= 00:30:45.991 12:53:05 -- setup/devices.sh@51 -- # local test_file= 00:30:45.991 12:53:05 -- setup/devices.sh@53 -- # local found=0 00:30:45.991 12:53:05 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:30:45.991 12:53:05 -- setup/devices.sh@59 -- # local pci status 00:30:45.991 12:53:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:30:45.991 12:53:05 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:30:45.991 12:53:05 -- setup/devices.sh@47 -- # setup output config 00:30:45.991 12:53:05 -- setup/common.sh@9 -- # [[ output == output ]] 00:30:45.991 12:53:05 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:30:45.991 12:53:05 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:30:45.991 12:53:05 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:30:45.991 12:53:05 -- setup/devices.sh@63 -- # found=1 00:30:45.991 12:53:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:30:45.991 12:53:05 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:30:45.991 12:53:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:30:46.560 12:53:05 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:30:46.560 12:53:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:30:46.560 12:53:05 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:30:46.560 12:53:05 
-- setup/devices.sh@60 -- # read -r pci _ _ status 00:30:46.560 12:53:05 -- setup/devices.sh@66 -- # (( found == 1 )) 00:30:46.560 12:53:05 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:30:46.560 12:53:05 -- setup/devices.sh@68 -- # return 0 00:30:46.560 12:53:05 -- setup/devices.sh@187 -- # cleanup_dm 00:30:46.560 12:53:05 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:30:46.560 12:53:05 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:30:46.560 12:53:05 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:30:46.560 12:53:05 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:30:46.560 12:53:05 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:30:46.560 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:30:46.560 12:53:05 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:30:46.560 12:53:05 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:30:46.560 00:30:46.560 real 0m4.536s 00:30:46.560 user 0m0.647s 00:30:46.560 sys 0m0.824s 00:30:46.560 ************************************ 00:30:46.560 END TEST dm_mount 00:30:46.560 ************************************ 00:30:46.560 12:53:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:46.560 12:53:05 -- common/autotest_common.sh@10 -- # set +x 00:30:46.560 12:53:05 -- setup/devices.sh@1 -- # cleanup 00:30:46.560 12:53:05 -- setup/devices.sh@11 -- # cleanup_nvme 00:30:46.560 12:53:05 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:30:46.560 12:53:05 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:30:46.560 12:53:05 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:30:46.560 12:53:05 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:30:46.560 12:53:05 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:30:46.819 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:30:46.819 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:30:46.819 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:30:46.819 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:30:46.819 12:53:06 -- setup/devices.sh@12 -- # cleanup_dm 00:30:46.819 12:53:06 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:30:47.078 12:53:06 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:30:47.078 12:53:06 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:30:47.078 12:53:06 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:30:47.078 12:53:06 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:30:47.078 12:53:06 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:30:47.078 ************************************ 00:30:47.078 END TEST devices 00:30:47.078 ************************************ 00:30:47.078 00:30:47.078 real 0m10.596s 00:30:47.078 user 0m2.316s 00:30:47.078 sys 0m2.623s 00:30:47.078 12:53:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:47.078 12:53:06 -- common/autotest_common.sh@10 -- # set +x 00:30:47.078 ************************************ 00:30:47.078 END TEST setup.sh 00:30:47.078 ************************************ 00:30:47.078 00:30:47.078 real 0m21.940s 00:30:47.078 user 0m7.302s 00:30:47.078 sys 0m8.946s 00:30:47.078 12:53:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:47.078 12:53:06 -- common/autotest_common.sh@10 -- # set +x 00:30:47.078 12:53:06 -- 
spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:30:47.078 Hugepages 00:30:47.078 node hugesize free / total 00:30:47.078 node0 1048576kB 0 / 0 00:30:47.078 node0 2048kB 2048 / 2048 00:30:47.078 00:30:47.078 Type BDF Vendor Device NUMA Driver Device Block devices 00:30:47.337 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:30:47.337 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:30:47.337 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:30:47.337 12:53:06 -- spdk/autotest.sh@141 -- # uname -s 00:30:47.337 12:53:06 -- spdk/autotest.sh@141 -- # [[ Linux == Linux ]] 00:30:47.337 12:53:06 -- spdk/autotest.sh@143 -- # nvme_namespace_revert 00:30:47.337 12:53:06 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:30:47.905 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:48.164 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:30:48.164 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:30:48.164 12:53:07 -- common/autotest_common.sh@1517 -- # sleep 1 00:30:49.542 12:53:08 -- common/autotest_common.sh@1518 -- # bdfs=() 00:30:49.542 12:53:08 -- common/autotest_common.sh@1518 -- # local bdfs 00:30:49.542 12:53:08 -- common/autotest_common.sh@1519 -- # bdfs=($(get_nvme_bdfs)) 00:30:49.542 12:53:08 -- common/autotest_common.sh@1519 -- # get_nvme_bdfs 00:30:49.542 12:53:08 -- common/autotest_common.sh@1498 -- # bdfs=() 00:30:49.542 12:53:08 -- common/autotest_common.sh@1498 -- # local bdfs 00:30:49.542 12:53:08 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:30:49.542 12:53:08 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:30:49.542 12:53:08 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:30:49.542 12:53:08 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:30:49.542 12:53:08 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:30:49.542 12:53:08 -- common/autotest_common.sh@1521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:30:49.542 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:49.542 Waiting for block devices as requested 00:30:49.801 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:30:49.801 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:30:49.801 12:53:09 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:30:49.801 12:53:09 -- common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:00:06.0 00:30:49.801 12:53:09 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:30:49.802 12:53:09 -- common/autotest_common.sh@1487 -- # grep 0000:00:06.0/nvme/nvme 00:30:49.802 12:53:09 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:30:49.802 12:53:09 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 ]] 00:30:49.802 12:53:09 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:30:49.802 12:53:09 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:30:49.802 12:53:09 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme0 00:30:49.802 12:53:09 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme0 ]] 00:30:49.802 12:53:09 -- 
common/autotest_common.sh@1530 -- # grep oacs 00:30:49.802 12:53:09 -- common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme0 00:30:49.802 12:53:09 -- common/autotest_common.sh@1530 -- # cut -d: -f2 00:30:49.802 12:53:09 -- common/autotest_common.sh@1530 -- # oacs=' 0x12a' 00:30:49.802 12:53:09 -- common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:30:49.802 12:53:09 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:30:49.802 12:53:09 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme0 00:30:49.802 12:53:09 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:30:49.802 12:53:09 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:30:49.802 12:53:09 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:30:49.802 12:53:09 -- common/autotest_common.sh@1540 -- # [[ 0 -eq 0 ]] 00:30:49.802 12:53:09 -- common/autotest_common.sh@1542 -- # continue 00:30:49.802 12:53:09 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:30:49.802 12:53:09 -- common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:00:07.0 00:30:49.802 12:53:09 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:30:49.802 12:53:09 -- common/autotest_common.sh@1487 -- # grep 0000:00:07.0/nvme/nvme 00:30:49.802 12:53:09 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 00:30:49.802 12:53:09 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 ]] 00:30:49.802 12:53:09 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 00:30:49.802 12:53:09 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:30:49.802 12:53:09 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme1 00:30:49.802 12:53:09 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme1 ]] 00:30:49.802 12:53:09 -- common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme1 00:30:49.802 12:53:09 -- common/autotest_common.sh@1530 -- # cut -d: -f2 00:30:49.802 12:53:09 -- common/autotest_common.sh@1530 -- # grep oacs 00:30:49.802 12:53:09 -- common/autotest_common.sh@1530 -- # oacs=' 0x12a' 00:30:49.802 12:53:09 -- common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:30:49.802 12:53:09 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:30:49.802 12:53:09 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme1 00:30:49.802 12:53:09 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:30:49.802 12:53:09 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:30:49.802 12:53:09 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:30:49.802 12:53:09 -- common/autotest_common.sh@1540 -- # [[ 0 -eq 0 ]] 00:30:49.802 12:53:09 -- common/autotest_common.sh@1542 -- # continue 00:30:49.802 12:53:09 -- spdk/autotest.sh@146 -- # timing_exit pre_cleanup 00:30:49.802 12:53:09 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:49.802 12:53:09 -- common/autotest_common.sh@10 -- # set +x 00:30:50.061 12:53:09 -- spdk/autotest.sh@149 -- # timing_enter afterboot 00:30:50.061 12:53:09 -- common/autotest_common.sh@712 -- # xtrace_disable 00:30:50.061 12:53:09 -- common/autotest_common.sh@10 -- # set +x 00:30:50.061 12:53:09 -- spdk/autotest.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:30:50.629 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:50.629 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:30:50.889 0000:00:07.0 (1b36 0010): nvme -> 
uio_pci_generic 00:30:50.889 12:53:10 -- spdk/autotest.sh@151 -- # timing_exit afterboot 00:30:50.889 12:53:10 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:50.889 12:53:10 -- common/autotest_common.sh@10 -- # set +x 00:30:50.889 12:53:10 -- spdk/autotest.sh@155 -- # opal_revert_cleanup 00:30:50.889 12:53:10 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:30:50.889 12:53:10 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:30:50.889 12:53:10 -- common/autotest_common.sh@1562 -- # bdfs=() 00:30:50.889 12:53:10 -- common/autotest_common.sh@1562 -- # local bdfs 00:30:50.889 12:53:10 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:30:50.889 12:53:10 -- common/autotest_common.sh@1498 -- # bdfs=() 00:30:50.889 12:53:10 -- common/autotest_common.sh@1498 -- # local bdfs 00:30:50.889 12:53:10 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:30:50.889 12:53:10 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:30:50.889 12:53:10 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:30:50.889 12:53:10 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:30:50.889 12:53:10 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:30:50.889 12:53:10 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:30:50.889 12:53:10 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:00:06.0/device 00:30:50.889 12:53:10 -- common/autotest_common.sh@1565 -- # device=0x0010 00:30:50.889 12:53:10 -- common/autotest_common.sh@1566 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:30:50.889 12:53:10 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:30:50.889 12:53:10 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:00:07.0/device 00:30:50.889 12:53:10 -- common/autotest_common.sh@1565 -- # device=0x0010 00:30:50.889 12:53:10 -- common/autotest_common.sh@1566 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:30:50.889 12:53:10 -- common/autotest_common.sh@1571 -- # printf '%s\n' 00:30:50.889 12:53:10 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:30:50.889 12:53:10 -- common/autotest_common.sh@1578 -- # return 0 00:30:50.889 12:53:10 -- spdk/autotest.sh@161 -- # '[' 0 -eq 1 ']' 00:30:50.889 12:53:10 -- spdk/autotest.sh@165 -- # '[' 1 -eq 1 ']' 00:30:50.889 12:53:10 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:30:50.889 12:53:10 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:30:50.889 12:53:10 -- spdk/autotest.sh@173 -- # timing_enter lib 00:30:50.889 12:53:10 -- common/autotest_common.sh@712 -- # xtrace_disable 00:30:50.889 12:53:10 -- common/autotest_common.sh@10 -- # set +x 00:30:50.889 12:53:10 -- spdk/autotest.sh@175 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:30:50.889 12:53:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:30:50.889 12:53:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:50.889 12:53:10 -- common/autotest_common.sh@10 -- # set +x 00:30:50.889 ************************************ 00:30:50.889 START TEST env 00:30:50.889 ************************************ 00:30:50.889 12:53:10 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:30:51.148 * Looking for test storage... 
00:30:51.148 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:30:51.148 12:53:10 -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:30:51.148 12:53:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:30:51.148 12:53:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:51.148 12:53:10 -- common/autotest_common.sh@10 -- # set +x 00:30:51.148 ************************************ 00:30:51.148 START TEST env_memory 00:30:51.148 ************************************ 00:30:51.148 12:53:10 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:30:51.148 00:30:51.148 00:30:51.148 CUnit - A unit testing framework for C - Version 2.1-3 00:30:51.148 http://cunit.sourceforge.net/ 00:30:51.148 00:30:51.148 00:30:51.148 Suite: memory 00:30:51.148 Test: alloc and free memory map ...[2024-07-22 12:53:10.396695] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:30:51.148 passed 00:30:51.148 Test: mem map translation ...[2024-07-22 12:53:10.428291] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:30:51.148 [2024-07-22 12:53:10.428475] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:30:51.148 [2024-07-22 12:53:10.428538] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:30:51.148 [2024-07-22 12:53:10.428549] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:30:51.148 passed 00:30:51.148 Test: mem map registration ...[2024-07-22 12:53:10.492699] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:30:51.149 [2024-07-22 12:53:10.492745] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:30:51.149 passed 00:30:51.408 Test: mem map adjacent registrations ...passed 00:30:51.408 00:30:51.408 Run Summary: Type Total Ran Passed Failed Inactive 00:30:51.408 suites 1 1 n/a 0 0 00:30:51.408 tests 4 4 4 0 0 00:30:51.408 asserts 152 152 152 0 n/a 00:30:51.408 00:30:51.408 Elapsed time = 0.214 seconds 00:30:51.408 00:30:51.408 real 0m0.235s 00:30:51.408 user 0m0.216s 00:30:51.408 sys 0m0.015s 00:30:51.408 ************************************ 00:30:51.408 END TEST env_memory 00:30:51.408 ************************************ 00:30:51.408 12:53:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:51.408 12:53:10 -- common/autotest_common.sh@10 -- # set +x 00:30:51.408 12:53:10 -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:30:51.408 12:53:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:30:51.408 12:53:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:51.408 12:53:10 -- common/autotest_common.sh@10 -- # set +x 00:30:51.408 ************************************ 00:30:51.408 START TEST env_vtophys 00:30:51.408 ************************************ 00:30:51.408 12:53:10 -- common/autotest_common.sh@1104 -- # 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:30:51.408 EAL: lib.eal log level changed from notice to debug 00:30:51.408 EAL: Detected lcore 0 as core 0 on socket 0 00:30:51.408 EAL: Detected lcore 1 as core 0 on socket 0 00:30:51.408 EAL: Detected lcore 2 as core 0 on socket 0 00:30:51.408 EAL: Detected lcore 3 as core 0 on socket 0 00:30:51.408 EAL: Detected lcore 4 as core 0 on socket 0 00:30:51.408 EAL: Detected lcore 5 as core 0 on socket 0 00:30:51.408 EAL: Detected lcore 6 as core 0 on socket 0 00:30:51.408 EAL: Detected lcore 7 as core 0 on socket 0 00:30:51.408 EAL: Detected lcore 8 as core 0 on socket 0 00:30:51.408 EAL: Detected lcore 9 as core 0 on socket 0 00:30:51.408 EAL: Maximum logical cores by configuration: 128 00:30:51.408 EAL: Detected CPU lcores: 10 00:30:51.408 EAL: Detected NUMA nodes: 1 00:30:51.408 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:30:51.408 EAL: Detected shared linkage of DPDK 00:30:51.408 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23.0 00:30:51.408 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23.0 00:30:51.408 EAL: Registered [vdev] bus. 00:30:51.408 EAL: bus.vdev log level changed from disabled to notice 00:30:51.408 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23.0 00:30:51.408 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23.0 00:30:51.408 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:30:51.408 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:30:51.408 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:30:51.408 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:30:51.408 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:30:51.408 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:30:51.408 EAL: No shared files mode enabled, IPC will be disabled 00:30:51.409 EAL: No shared files mode enabled, IPC is disabled 00:30:51.409 EAL: Selected IOVA mode 'PA' 00:30:51.409 EAL: Probing VFIO support... 00:30:51.409 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:30:51.409 EAL: VFIO modules not loaded, skipping VFIO support... 00:30:51.409 EAL: Ask a virtual area of 0x2e000 bytes 00:30:51.409 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:30:51.409 EAL: Setting up physically contiguous memory... 
00:30:51.409 EAL: Setting maximum number of open files to 524288 00:30:51.409 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:30:51.409 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:30:51.409 EAL: Ask a virtual area of 0x61000 bytes 00:30:51.409 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:30:51.409 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:30:51.409 EAL: Ask a virtual area of 0x400000000 bytes 00:30:51.409 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:30:51.409 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:30:51.409 EAL: Ask a virtual area of 0x61000 bytes 00:30:51.409 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:30:51.409 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:30:51.409 EAL: Ask a virtual area of 0x400000000 bytes 00:30:51.409 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:30:51.409 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:30:51.409 EAL: Ask a virtual area of 0x61000 bytes 00:30:51.409 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:30:51.409 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:30:51.409 EAL: Ask a virtual area of 0x400000000 bytes 00:30:51.409 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:30:51.409 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:30:51.409 EAL: Ask a virtual area of 0x61000 bytes 00:30:51.409 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:30:51.409 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:30:51.409 EAL: Ask a virtual area of 0x400000000 bytes 00:30:51.409 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:30:51.409 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:30:51.409 EAL: Hugepages will be freed exactly as allocated. 00:30:51.409 EAL: No shared files mode enabled, IPC is disabled 00:30:51.409 EAL: No shared files mode enabled, IPC is disabled 00:30:51.409 EAL: TSC frequency is ~2200000 KHz 00:30:51.409 EAL: Main lcore 0 is ready (tid=7fe90b69ea00;cpuset=[0]) 00:30:51.409 EAL: Trying to obtain current memory policy. 00:30:51.409 EAL: Setting policy MPOL_PREFERRED for socket 0 00:30:51.409 EAL: Restoring previous memory policy: 0 00:30:51.409 EAL: request: mp_malloc_sync 00:30:51.409 EAL: No shared files mode enabled, IPC is disabled 00:30:51.409 EAL: Heap on socket 0 was expanded by 2MB 00:30:51.409 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:30:51.409 EAL: No shared files mode enabled, IPC is disabled 00:30:51.409 EAL: No PCI address specified using 'addr=' in: bus=pci 00:30:51.409 EAL: Mem event callback 'spdk:(nil)' registered 00:30:51.409 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:30:51.409 00:30:51.409 00:30:51.409 CUnit - A unit testing framework for C - Version 2.1-3 00:30:51.409 http://cunit.sourceforge.net/ 00:30:51.409 00:30:51.409 00:30:51.409 Suite: components_suite 00:30:51.409 Test: vtophys_malloc_test ...passed 00:30:51.409 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
00:30:51.409 EAL: Setting policy MPOL_PREFERRED for socket 0 00:30:51.409 EAL: Restoring previous memory policy: 4 00:30:51.409 EAL: Calling mem event callback 'spdk:(nil)' 00:30:51.409 EAL: request: mp_malloc_sync 00:30:51.409 EAL: No shared files mode enabled, IPC is disabled 00:30:51.409 EAL: Heap on socket 0 was expanded by 4MB 00:30:51.409 EAL: Calling mem event callback 'spdk:(nil)' 00:30:51.409 EAL: request: mp_malloc_sync 00:30:51.409 EAL: No shared files mode enabled, IPC is disabled 00:30:51.409 EAL: Heap on socket 0 was shrunk by 4MB 00:30:51.409 EAL: Trying to obtain current memory policy. 00:30:51.409 EAL: Setting policy MPOL_PREFERRED for socket 0 00:30:51.409 EAL: Restoring previous memory policy: 4 00:30:51.409 EAL: Calling mem event callback 'spdk:(nil)' 00:30:51.409 EAL: request: mp_malloc_sync 00:30:51.409 EAL: No shared files mode enabled, IPC is disabled 00:30:51.409 EAL: Heap on socket 0 was expanded by 6MB 00:30:51.409 EAL: Calling mem event callback 'spdk:(nil)' 00:30:51.409 EAL: request: mp_malloc_sync 00:30:51.409 EAL: No shared files mode enabled, IPC is disabled 00:30:51.409 EAL: Heap on socket 0 was shrunk by 6MB 00:30:51.409 EAL: Trying to obtain current memory policy. 00:30:51.409 EAL: Setting policy MPOL_PREFERRED for socket 0 00:30:51.409 EAL: Restoring previous memory policy: 4 00:30:51.409 EAL: Calling mem event callback 'spdk:(nil)' 00:30:51.409 EAL: request: mp_malloc_sync 00:30:51.409 EAL: No shared files mode enabled, IPC is disabled 00:30:51.409 EAL: Heap on socket 0 was expanded by 10MB 00:30:51.409 EAL: Calling mem event callback 'spdk:(nil)' 00:30:51.409 EAL: request: mp_malloc_sync 00:30:51.409 EAL: No shared files mode enabled, IPC is disabled 00:30:51.409 EAL: Heap on socket 0 was shrunk by 10MB 00:30:51.409 EAL: Trying to obtain current memory policy. 00:30:51.409 EAL: Setting policy MPOL_PREFERRED for socket 0 00:30:51.409 EAL: Restoring previous memory policy: 4 00:30:51.409 EAL: Calling mem event callback 'spdk:(nil)' 00:30:51.409 EAL: request: mp_malloc_sync 00:30:51.409 EAL: No shared files mode enabled, IPC is disabled 00:30:51.409 EAL: Heap on socket 0 was expanded by 18MB 00:30:51.409 EAL: Calling mem event callback 'spdk:(nil)' 00:30:51.409 EAL: request: mp_malloc_sync 00:30:51.409 EAL: No shared files mode enabled, IPC is disabled 00:30:51.409 EAL: Heap on socket 0 was shrunk by 18MB 00:30:51.409 EAL: Trying to obtain current memory policy. 00:30:51.409 EAL: Setting policy MPOL_PREFERRED for socket 0 00:30:51.409 EAL: Restoring previous memory policy: 4 00:30:51.409 EAL: Calling mem event callback 'spdk:(nil)' 00:30:51.409 EAL: request: mp_malloc_sync 00:30:51.409 EAL: No shared files mode enabled, IPC is disabled 00:30:51.409 EAL: Heap on socket 0 was expanded by 34MB 00:30:51.409 EAL: Calling mem event callback 'spdk:(nil)' 00:30:51.409 EAL: request: mp_malloc_sync 00:30:51.409 EAL: No shared files mode enabled, IPC is disabled 00:30:51.409 EAL: Heap on socket 0 was shrunk by 34MB 00:30:51.409 EAL: Trying to obtain current memory policy. 
00:30:51.409 EAL: Setting policy MPOL_PREFERRED for socket 0 00:30:51.409 EAL: Restoring previous memory policy: 4 00:30:51.409 EAL: Calling mem event callback 'spdk:(nil)' 00:30:51.409 EAL: request: mp_malloc_sync 00:30:51.409 EAL: No shared files mode enabled, IPC is disabled 00:30:51.409 EAL: Heap on socket 0 was expanded by 66MB 00:30:51.669 EAL: Calling mem event callback 'spdk:(nil)' 00:30:51.669 EAL: request: mp_malloc_sync 00:30:51.669 EAL: No shared files mode enabled, IPC is disabled 00:30:51.669 EAL: Heap on socket 0 was shrunk by 66MB 00:30:51.669 EAL: Trying to obtain current memory policy. 00:30:51.669 EAL: Setting policy MPOL_PREFERRED for socket 0 00:30:51.669 EAL: Restoring previous memory policy: 4 00:30:51.669 EAL: Calling mem event callback 'spdk:(nil)' 00:30:51.669 EAL: request: mp_malloc_sync 00:30:51.669 EAL: No shared files mode enabled, IPC is disabled 00:30:51.669 EAL: Heap on socket 0 was expanded by 130MB 00:30:51.669 EAL: Calling mem event callback 'spdk:(nil)' 00:30:51.669 EAL: request: mp_malloc_sync 00:30:51.669 EAL: No shared files mode enabled, IPC is disabled 00:30:51.669 EAL: Heap on socket 0 was shrunk by 130MB 00:30:51.669 EAL: Trying to obtain current memory policy. 00:30:51.669 EAL: Setting policy MPOL_PREFERRED for socket 0 00:30:51.669 EAL: Restoring previous memory policy: 4 00:30:51.669 EAL: Calling mem event callback 'spdk:(nil)' 00:30:51.669 EAL: request: mp_malloc_sync 00:30:51.669 EAL: No shared files mode enabled, IPC is disabled 00:30:51.669 EAL: Heap on socket 0 was expanded by 258MB 00:30:51.669 EAL: Calling mem event callback 'spdk:(nil)' 00:30:51.669 EAL: request: mp_malloc_sync 00:30:51.669 EAL: No shared files mode enabled, IPC is disabled 00:30:51.669 EAL: Heap on socket 0 was shrunk by 258MB 00:30:51.669 EAL: Trying to obtain current memory policy. 00:30:51.669 EAL: Setting policy MPOL_PREFERRED for socket 0 00:30:51.928 EAL: Restoring previous memory policy: 4 00:30:51.928 EAL: Calling mem event callback 'spdk:(nil)' 00:30:51.928 EAL: request: mp_malloc_sync 00:30:51.928 EAL: No shared files mode enabled, IPC is disabled 00:30:51.928 EAL: Heap on socket 0 was expanded by 514MB 00:30:51.928 EAL: Calling mem event callback 'spdk:(nil)' 00:30:52.186 EAL: request: mp_malloc_sync 00:30:52.186 EAL: No shared files mode enabled, IPC is disabled 00:30:52.186 EAL: Heap on socket 0 was shrunk by 514MB 00:30:52.186 EAL: Trying to obtain current memory policy. 
00:30:52.186 EAL: Setting policy MPOL_PREFERRED for socket 0 00:30:52.446 EAL: Restoring previous memory policy: 4 00:30:52.446 EAL: Calling mem event callback 'spdk:(nil)' 00:30:52.446 EAL: request: mp_malloc_sync 00:30:52.446 EAL: No shared files mode enabled, IPC is disabled 00:30:52.446 EAL: Heap on socket 0 was expanded by 1026MB 00:30:52.705 EAL: Calling mem event callback 'spdk:(nil)' 00:30:52.964 passed 00:30:52.964 00:30:52.964 Run Summary: Type Total Ran Passed Failed Inactive 00:30:52.964 suites 1 1 n/a 0 0 00:30:52.964 tests 2 2 2 0 0 00:30:52.964 asserts 5218 5218 5218 0 n/a 00:30:52.964 00:30:52.964 Elapsed time = 1.310 seconds 00:30:52.964 EAL: request: mp_malloc_sync 00:30:52.964 EAL: No shared files mode enabled, IPC is disabled 00:30:52.964 EAL: Heap on socket 0 was shrunk by 1026MB 00:30:52.964 EAL: Calling mem event callback 'spdk:(nil)' 00:30:52.964 EAL: request: mp_malloc_sync 00:30:52.964 EAL: No shared files mode enabled, IPC is disabled 00:30:52.964 EAL: Heap on socket 0 was shrunk by 2MB 00:30:52.964 EAL: No shared files mode enabled, IPC is disabled 00:30:52.964 EAL: No shared files mode enabled, IPC is disabled 00:30:52.964 EAL: No shared files mode enabled, IPC is disabled 00:30:52.964 ************************************ 00:30:52.964 END TEST env_vtophys 00:30:52.964 ************************************ 00:30:52.964 00:30:52.964 real 0m1.511s 00:30:52.964 user 0m0.836s 00:30:52.964 sys 0m0.535s 00:30:52.964 12:53:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:52.964 12:53:12 -- common/autotest_common.sh@10 -- # set +x 00:30:52.964 12:53:12 -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:30:52.964 12:53:12 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:30:52.964 12:53:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:52.964 12:53:12 -- common/autotest_common.sh@10 -- # set +x 00:30:52.964 ************************************ 00:30:52.964 START TEST env_pci 00:30:52.964 ************************************ 00:30:52.965 12:53:12 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:30:52.965 00:30:52.965 00:30:52.965 CUnit - A unit testing framework for C - Version 2.1-3 00:30:52.965 http://cunit.sourceforge.net/ 00:30:52.965 00:30:52.965 00:30:52.965 Suite: pci 00:30:52.965 Test: pci_hook ...[2024-07-22 12:53:12.207228] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 67230 has claimed it 00:30:52.965 passed 00:30:52.965 00:30:52.965 Run Summary: Type Total Ran Passed Failed Inactive 00:30:52.965 suites 1 1 n/a 0 0 00:30:52.965 tests 1 1 1 0 0 00:30:52.965 asserts 25 25 25 0 n/a 00:30:52.965 00:30:52.965 Elapsed time = 0.002 seconds 00:30:52.965 EAL: Cannot find device (10000:00:01.0) 00:30:52.965 EAL: Failed to attach device on primary process 00:30:52.965 ************************************ 00:30:52.965 END TEST env_pci 00:30:52.965 ************************************ 00:30:52.965 00:30:52.965 real 0m0.019s 00:30:52.965 user 0m0.011s 00:30:52.965 sys 0m0.007s 00:30:52.965 12:53:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:52.965 12:53:12 -- common/autotest_common.sh@10 -- # set +x 00:30:52.965 12:53:12 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:30:52.965 12:53:12 -- env/env.sh@15 -- # uname 00:30:52.965 12:53:12 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:30:52.965 12:53:12 -- env/env.sh@22 -- # 
argv+=--base-virtaddr=0x200000000000 00:30:52.965 12:53:12 -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:30:52.965 12:53:12 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:30:52.965 12:53:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:52.965 12:53:12 -- common/autotest_common.sh@10 -- # set +x 00:30:52.965 ************************************ 00:30:52.965 START TEST env_dpdk_post_init 00:30:52.965 ************************************ 00:30:52.965 12:53:12 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:30:52.965 EAL: Detected CPU lcores: 10 00:30:52.965 EAL: Detected NUMA nodes: 1 00:30:52.965 EAL: Detected shared linkage of DPDK 00:30:52.965 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:30:52.965 EAL: Selected IOVA mode 'PA' 00:30:53.224 TELEMETRY: No legacy callbacks, legacy socket not created 00:30:53.224 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:06.0 (socket -1) 00:30:53.224 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:07.0 (socket -1) 00:30:53.224 Starting DPDK initialization... 00:30:53.224 Starting SPDK post initialization... 00:30:53.224 SPDK NVMe probe 00:30:53.224 Attaching to 0000:00:06.0 00:30:53.224 Attaching to 0000:00:07.0 00:30:53.224 Attached to 0000:00:06.0 00:30:53.224 Attached to 0000:00:07.0 00:30:53.224 Cleaning up... 00:30:53.224 00:30:53.224 real 0m0.175s 00:30:53.224 user 0m0.040s 00:30:53.224 sys 0m0.035s 00:30:53.224 12:53:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:53.224 12:53:12 -- common/autotest_common.sh@10 -- # set +x 00:30:53.224 ************************************ 00:30:53.224 END TEST env_dpdk_post_init 00:30:53.224 ************************************ 00:30:53.224 12:53:12 -- env/env.sh@26 -- # uname 00:30:53.224 12:53:12 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:30:53.224 12:53:12 -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:30:53.224 12:53:12 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:30:53.224 12:53:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:53.224 12:53:12 -- common/autotest_common.sh@10 -- # set +x 00:30:53.224 ************************************ 00:30:53.224 START TEST env_mem_callbacks 00:30:53.224 ************************************ 00:30:53.224 12:53:12 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:30:53.224 EAL: Detected CPU lcores: 10 00:30:53.224 EAL: Detected NUMA nodes: 1 00:30:53.224 EAL: Detected shared linkage of DPDK 00:30:53.224 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:30:53.224 EAL: Selected IOVA mode 'PA' 00:30:53.224 TELEMETRY: No legacy callbacks, legacy socket not created 00:30:53.224 00:30:53.224 00:30:53.224 CUnit - A unit testing framework for C - Version 2.1-3 00:30:53.224 http://cunit.sourceforge.net/ 00:30:53.224 00:30:53.224 00:30:53.224 Suite: memory 00:30:53.224 Test: test ... 
00:30:53.224 register 0x200000200000 2097152 00:30:53.224 malloc 3145728 00:30:53.224 register 0x200000400000 4194304 00:30:53.224 buf 0x200000500000 len 3145728 PASSED 00:30:53.224 malloc 64 00:30:53.224 buf 0x2000004fff40 len 64 PASSED 00:30:53.224 malloc 4194304 00:30:53.224 register 0x200000800000 6291456 00:30:53.224 buf 0x200000a00000 len 4194304 PASSED 00:30:53.224 free 0x200000500000 3145728 00:30:53.224 free 0x2000004fff40 64 00:30:53.224 unregister 0x200000400000 4194304 PASSED 00:30:53.224 free 0x200000a00000 4194304 00:30:53.224 unregister 0x200000800000 6291456 PASSED 00:30:53.224 malloc 8388608 00:30:53.224 register 0x200000400000 10485760 00:30:53.224 buf 0x200000600000 len 8388608 PASSED 00:30:53.224 free 0x200000600000 8388608 00:30:53.224 unregister 0x200000400000 10485760 PASSED 00:30:53.224 passed 00:30:53.224 00:30:53.224 Run Summary: Type Total Ran Passed Failed Inactive 00:30:53.224 suites 1 1 n/a 0 0 00:30:53.224 tests 1 1 1 0 0 00:30:53.224 asserts 15 15 15 0 n/a 00:30:53.225 00:30:53.225 Elapsed time = 0.009 seconds 00:30:53.225 00:30:53.225 real 0m0.142s 00:30:53.225 user 0m0.015s 00:30:53.225 sys 0m0.026s 00:30:53.225 12:53:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:53.225 12:53:12 -- common/autotest_common.sh@10 -- # set +x 00:30:53.225 ************************************ 00:30:53.225 END TEST env_mem_callbacks 00:30:53.225 ************************************ 00:30:53.484 00:30:53.484 real 0m2.420s 00:30:53.484 user 0m1.231s 00:30:53.484 sys 0m0.833s 00:30:53.484 12:53:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:53.484 12:53:12 -- common/autotest_common.sh@10 -- # set +x 00:30:53.484 ************************************ 00:30:53.484 END TEST env 00:30:53.484 ************************************ 00:30:53.484 12:53:12 -- spdk/autotest.sh@176 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:30:53.484 12:53:12 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:30:53.484 12:53:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:53.484 12:53:12 -- common/autotest_common.sh@10 -- # set +x 00:30:53.484 ************************************ 00:30:53.484 START TEST rpc 00:30:53.484 ************************************ 00:30:53.484 12:53:12 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:30:53.484 * Looking for test storage... 00:30:53.484 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:30:53.484 12:53:12 -- rpc/rpc.sh@65 -- # spdk_pid=67334 00:30:53.484 12:53:12 -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:30:53.484 12:53:12 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:30:53.484 12:53:12 -- rpc/rpc.sh@67 -- # waitforlisten 67334 00:30:53.484 12:53:12 -- common/autotest_common.sh@819 -- # '[' -z 67334 ']' 00:30:53.484 12:53:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:53.484 12:53:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:53.484 12:53:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:53.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:30:53.484 12:53:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:53.484 12:53:12 -- common/autotest_common.sh@10 -- # set +x 00:30:53.484 [2024-07-22 12:53:12.903829] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:30:53.484 [2024-07-22 12:53:12.903989] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67334 ] 00:30:53.745 [2024-07-22 12:53:13.053553] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:53.745 [2024-07-22 12:53:13.142162] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:30:53.745 [2024-07-22 12:53:13.142344] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:30:53.745 [2024-07-22 12:53:13.142378] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 67334' to capture a snapshot of events at runtime. 00:30:53.746 [2024-07-22 12:53:13.142401] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid67334 for offline analysis/debug. 00:30:53.746 [2024-07-22 12:53:13.142437] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:54.685 12:53:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:54.685 12:53:13 -- common/autotest_common.sh@852 -- # return 0 00:30:54.685 12:53:13 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:30:54.685 12:53:13 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:30:54.685 12:53:13 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:30:54.685 12:53:13 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:30:54.685 12:53:13 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:30:54.685 12:53:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:54.685 12:53:13 -- common/autotest_common.sh@10 -- # set +x 00:30:54.685 ************************************ 00:30:54.685 START TEST rpc_integrity 00:30:54.685 ************************************ 00:30:54.685 12:53:13 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:30:54.685 12:53:13 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:30:54.685 12:53:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:54.685 12:53:13 -- common/autotest_common.sh@10 -- # set +x 00:30:54.685 12:53:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:54.685 12:53:13 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:30:54.685 12:53:13 -- rpc/rpc.sh@13 -- # jq length 00:30:54.685 12:53:13 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:30:54.685 12:53:13 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:30:54.685 12:53:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:54.685 12:53:13 -- common/autotest_common.sh@10 -- # set +x 00:30:54.685 12:53:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:54.685 12:53:13 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:30:54.685 12:53:13 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:30:54.685 12:53:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:54.685 12:53:13 -- 
common/autotest_common.sh@10 -- # set +x 00:30:54.685 12:53:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:54.685 12:53:13 -- rpc/rpc.sh@16 -- # bdevs='[ 00:30:54.685 { 00:30:54.685 "aliases": [ 00:30:54.685 "5121ba93-20c0-4a43-a8e7-126be2eb5752" 00:30:54.685 ], 00:30:54.685 "assigned_rate_limits": { 00:30:54.685 "r_mbytes_per_sec": 0, 00:30:54.685 "rw_ios_per_sec": 0, 00:30:54.685 "rw_mbytes_per_sec": 0, 00:30:54.685 "w_mbytes_per_sec": 0 00:30:54.685 }, 00:30:54.685 "block_size": 512, 00:30:54.685 "claimed": false, 00:30:54.685 "driver_specific": {}, 00:30:54.685 "memory_domains": [ 00:30:54.685 { 00:30:54.685 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:54.685 "dma_device_type": 2 00:30:54.685 } 00:30:54.685 ], 00:30:54.685 "name": "Malloc0", 00:30:54.685 "num_blocks": 16384, 00:30:54.685 "product_name": "Malloc disk", 00:30:54.685 "supported_io_types": { 00:30:54.685 "abort": true, 00:30:54.685 "compare": false, 00:30:54.685 "compare_and_write": false, 00:30:54.685 "flush": true, 00:30:54.685 "nvme_admin": false, 00:30:54.685 "nvme_io": false, 00:30:54.685 "read": true, 00:30:54.686 "reset": true, 00:30:54.686 "unmap": true, 00:30:54.686 "write": true, 00:30:54.686 "write_zeroes": true 00:30:54.686 }, 00:30:54.686 "uuid": "5121ba93-20c0-4a43-a8e7-126be2eb5752", 00:30:54.686 "zoned": false 00:30:54.686 } 00:30:54.686 ]' 00:30:54.686 12:53:13 -- rpc/rpc.sh@17 -- # jq length 00:30:54.686 12:53:14 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:30:54.686 12:53:14 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:30:54.686 12:53:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:54.686 12:53:14 -- common/autotest_common.sh@10 -- # set +x 00:30:54.686 [2024-07-22 12:53:14.041429] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:30:54.686 [2024-07-22 12:53:14.041479] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:54.686 [2024-07-22 12:53:14.041496] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x8554d0 00:30:54.686 [2024-07-22 12:53:14.041506] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:54.686 [2024-07-22 12:53:14.043050] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:54.686 [2024-07-22 12:53:14.043085] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:30:54.686 Passthru0 00:30:54.686 12:53:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:54.686 12:53:14 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:30:54.686 12:53:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:54.686 12:53:14 -- common/autotest_common.sh@10 -- # set +x 00:30:54.686 12:53:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:54.686 12:53:14 -- rpc/rpc.sh@20 -- # bdevs='[ 00:30:54.686 { 00:30:54.686 "aliases": [ 00:30:54.686 "5121ba93-20c0-4a43-a8e7-126be2eb5752" 00:30:54.686 ], 00:30:54.686 "assigned_rate_limits": { 00:30:54.686 "r_mbytes_per_sec": 0, 00:30:54.686 "rw_ios_per_sec": 0, 00:30:54.686 "rw_mbytes_per_sec": 0, 00:30:54.686 "w_mbytes_per_sec": 0 00:30:54.686 }, 00:30:54.686 "block_size": 512, 00:30:54.686 "claim_type": "exclusive_write", 00:30:54.686 "claimed": true, 00:30:54.686 "driver_specific": {}, 00:30:54.686 "memory_domains": [ 00:30:54.686 { 00:30:54.686 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:54.686 "dma_device_type": 2 00:30:54.686 } 00:30:54.686 ], 00:30:54.686 "name": "Malloc0", 00:30:54.686 "num_blocks": 16384, 
00:30:54.686 "product_name": "Malloc disk", 00:30:54.686 "supported_io_types": { 00:30:54.686 "abort": true, 00:30:54.686 "compare": false, 00:30:54.686 "compare_and_write": false, 00:30:54.686 "flush": true, 00:30:54.686 "nvme_admin": false, 00:30:54.686 "nvme_io": false, 00:30:54.686 "read": true, 00:30:54.686 "reset": true, 00:30:54.686 "unmap": true, 00:30:54.686 "write": true, 00:30:54.686 "write_zeroes": true 00:30:54.686 }, 00:30:54.686 "uuid": "5121ba93-20c0-4a43-a8e7-126be2eb5752", 00:30:54.686 "zoned": false 00:30:54.686 }, 00:30:54.686 { 00:30:54.686 "aliases": [ 00:30:54.686 "25093366-b485-57b5-90e4-8898dad40842" 00:30:54.686 ], 00:30:54.686 "assigned_rate_limits": { 00:30:54.686 "r_mbytes_per_sec": 0, 00:30:54.686 "rw_ios_per_sec": 0, 00:30:54.686 "rw_mbytes_per_sec": 0, 00:30:54.686 "w_mbytes_per_sec": 0 00:30:54.686 }, 00:30:54.686 "block_size": 512, 00:30:54.686 "claimed": false, 00:30:54.686 "driver_specific": { 00:30:54.686 "passthru": { 00:30:54.686 "base_bdev_name": "Malloc0", 00:30:54.686 "name": "Passthru0" 00:30:54.686 } 00:30:54.686 }, 00:30:54.686 "memory_domains": [ 00:30:54.686 { 00:30:54.686 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:54.686 "dma_device_type": 2 00:30:54.686 } 00:30:54.686 ], 00:30:54.686 "name": "Passthru0", 00:30:54.686 "num_blocks": 16384, 00:30:54.686 "product_name": "passthru", 00:30:54.686 "supported_io_types": { 00:30:54.686 "abort": true, 00:30:54.686 "compare": false, 00:30:54.686 "compare_and_write": false, 00:30:54.686 "flush": true, 00:30:54.686 "nvme_admin": false, 00:30:54.686 "nvme_io": false, 00:30:54.686 "read": true, 00:30:54.686 "reset": true, 00:30:54.686 "unmap": true, 00:30:54.686 "write": true, 00:30:54.686 "write_zeroes": true 00:30:54.686 }, 00:30:54.686 "uuid": "25093366-b485-57b5-90e4-8898dad40842", 00:30:54.686 "zoned": false 00:30:54.686 } 00:30:54.686 ]' 00:30:54.686 12:53:14 -- rpc/rpc.sh@21 -- # jq length 00:30:54.946 12:53:14 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:30:54.946 12:53:14 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:30:54.946 12:53:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:54.946 12:53:14 -- common/autotest_common.sh@10 -- # set +x 00:30:54.946 12:53:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:54.946 12:53:14 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:30:54.946 12:53:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:54.946 12:53:14 -- common/autotest_common.sh@10 -- # set +x 00:30:54.946 12:53:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:54.946 12:53:14 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:30:54.946 12:53:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:54.946 12:53:14 -- common/autotest_common.sh@10 -- # set +x 00:30:54.946 12:53:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:54.946 12:53:14 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:30:54.946 12:53:14 -- rpc/rpc.sh@26 -- # jq length 00:30:54.946 12:53:14 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:30:54.946 00:30:54.946 real 0m0.328s 00:30:54.946 user 0m0.215s 00:30:54.946 sys 0m0.038s 00:30:54.946 12:53:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:54.946 12:53:14 -- common/autotest_common.sh@10 -- # set +x 00:30:54.947 ************************************ 00:30:54.947 END TEST rpc_integrity 00:30:54.947 ************************************ 00:30:54.947 12:53:14 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:30:54.947 12:53:14 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:30:54.947 
12:53:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:54.947 12:53:14 -- common/autotest_common.sh@10 -- # set +x 00:30:54.947 ************************************ 00:30:54.947 START TEST rpc_plugins 00:30:54.947 ************************************ 00:30:54.947 12:53:14 -- common/autotest_common.sh@1104 -- # rpc_plugins 00:30:54.947 12:53:14 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:30:54.947 12:53:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:54.947 12:53:14 -- common/autotest_common.sh@10 -- # set +x 00:30:54.947 12:53:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:54.947 12:53:14 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:30:54.947 12:53:14 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:30:54.947 12:53:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:54.947 12:53:14 -- common/autotest_common.sh@10 -- # set +x 00:30:54.947 12:53:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:54.947 12:53:14 -- rpc/rpc.sh@31 -- # bdevs='[ 00:30:54.947 { 00:30:54.947 "aliases": [ 00:30:54.947 "aa088d43-afe0-43eb-83f6-6da8d63c92b6" 00:30:54.947 ], 00:30:54.947 "assigned_rate_limits": { 00:30:54.947 "r_mbytes_per_sec": 0, 00:30:54.947 "rw_ios_per_sec": 0, 00:30:54.947 "rw_mbytes_per_sec": 0, 00:30:54.947 "w_mbytes_per_sec": 0 00:30:54.947 }, 00:30:54.947 "block_size": 4096, 00:30:54.947 "claimed": false, 00:30:54.947 "driver_specific": {}, 00:30:54.947 "memory_domains": [ 00:30:54.947 { 00:30:54.947 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:54.947 "dma_device_type": 2 00:30:54.947 } 00:30:54.947 ], 00:30:54.947 "name": "Malloc1", 00:30:54.947 "num_blocks": 256, 00:30:54.947 "product_name": "Malloc disk", 00:30:54.947 "supported_io_types": { 00:30:54.947 "abort": true, 00:30:54.947 "compare": false, 00:30:54.947 "compare_and_write": false, 00:30:54.947 "flush": true, 00:30:54.947 "nvme_admin": false, 00:30:54.947 "nvme_io": false, 00:30:54.947 "read": true, 00:30:54.947 "reset": true, 00:30:54.947 "unmap": true, 00:30:54.947 "write": true, 00:30:54.947 "write_zeroes": true 00:30:54.947 }, 00:30:54.947 "uuid": "aa088d43-afe0-43eb-83f6-6da8d63c92b6", 00:30:54.947 "zoned": false 00:30:54.947 } 00:30:54.947 ]' 00:30:54.947 12:53:14 -- rpc/rpc.sh@32 -- # jq length 00:30:54.947 12:53:14 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:30:54.947 12:53:14 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:30:54.947 12:53:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:54.947 12:53:14 -- common/autotest_common.sh@10 -- # set +x 00:30:54.947 12:53:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:54.947 12:53:14 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:30:54.947 12:53:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:54.947 12:53:14 -- common/autotest_common.sh@10 -- # set +x 00:30:54.947 12:53:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:54.947 12:53:14 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:30:54.947 12:53:14 -- rpc/rpc.sh@36 -- # jq length 00:30:55.206 12:53:14 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:30:55.206 00:30:55.206 real 0m0.175s 00:30:55.206 user 0m0.128s 00:30:55.206 sys 0m0.014s 00:30:55.206 12:53:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:55.206 12:53:14 -- common/autotest_common.sh@10 -- # set +x 00:30:55.206 ************************************ 00:30:55.206 END TEST rpc_plugins 00:30:55.206 ************************************ 00:30:55.206 12:53:14 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 
00:30:55.206 12:53:14 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:30:55.206 12:53:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:55.206 12:53:14 -- common/autotest_common.sh@10 -- # set +x 00:30:55.206 ************************************ 00:30:55.206 START TEST rpc_trace_cmd_test 00:30:55.206 ************************************ 00:30:55.206 12:53:14 -- common/autotest_common.sh@1104 -- # rpc_trace_cmd_test 00:30:55.206 12:53:14 -- rpc/rpc.sh@40 -- # local info 00:30:55.206 12:53:14 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:30:55.206 12:53:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:55.206 12:53:14 -- common/autotest_common.sh@10 -- # set +x 00:30:55.206 12:53:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:55.206 12:53:14 -- rpc/rpc.sh@42 -- # info='{ 00:30:55.206 "bdev": { 00:30:55.206 "mask": "0x8", 00:30:55.206 "tpoint_mask": "0xffffffffffffffff" 00:30:55.206 }, 00:30:55.206 "bdev_nvme": { 00:30:55.206 "mask": "0x4000", 00:30:55.206 "tpoint_mask": "0x0" 00:30:55.206 }, 00:30:55.206 "blobfs": { 00:30:55.207 "mask": "0x80", 00:30:55.207 "tpoint_mask": "0x0" 00:30:55.207 }, 00:30:55.207 "dsa": { 00:30:55.207 "mask": "0x200", 00:30:55.207 "tpoint_mask": "0x0" 00:30:55.207 }, 00:30:55.207 "ftl": { 00:30:55.207 "mask": "0x40", 00:30:55.207 "tpoint_mask": "0x0" 00:30:55.207 }, 00:30:55.207 "iaa": { 00:30:55.207 "mask": "0x1000", 00:30:55.207 "tpoint_mask": "0x0" 00:30:55.207 }, 00:30:55.207 "iscsi_conn": { 00:30:55.207 "mask": "0x2", 00:30:55.207 "tpoint_mask": "0x0" 00:30:55.207 }, 00:30:55.207 "nvme_pcie": { 00:30:55.207 "mask": "0x800", 00:30:55.207 "tpoint_mask": "0x0" 00:30:55.207 }, 00:30:55.207 "nvme_tcp": { 00:30:55.207 "mask": "0x2000", 00:30:55.207 "tpoint_mask": "0x0" 00:30:55.207 }, 00:30:55.207 "nvmf_rdma": { 00:30:55.207 "mask": "0x10", 00:30:55.207 "tpoint_mask": "0x0" 00:30:55.207 }, 00:30:55.207 "nvmf_tcp": { 00:30:55.207 "mask": "0x20", 00:30:55.207 "tpoint_mask": "0x0" 00:30:55.207 }, 00:30:55.207 "scsi": { 00:30:55.207 "mask": "0x4", 00:30:55.207 "tpoint_mask": "0x0" 00:30:55.207 }, 00:30:55.207 "thread": { 00:30:55.207 "mask": "0x400", 00:30:55.207 "tpoint_mask": "0x0" 00:30:55.207 }, 00:30:55.207 "tpoint_group_mask": "0x8", 00:30:55.207 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid67334" 00:30:55.207 }' 00:30:55.207 12:53:14 -- rpc/rpc.sh@43 -- # jq length 00:30:55.207 12:53:14 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:30:55.207 12:53:14 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:30:55.207 12:53:14 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:30:55.207 12:53:14 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:30:55.467 12:53:14 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:30:55.467 12:53:14 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:30:55.467 12:53:14 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:30:55.467 12:53:14 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:30:55.467 12:53:14 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:30:55.467 00:30:55.467 real 0m0.281s 00:30:55.467 user 0m0.248s 00:30:55.467 sys 0m0.023s 00:30:55.467 12:53:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:55.467 ************************************ 00:30:55.467 END TEST rpc_trace_cmd_test 00:30:55.467 12:53:14 -- common/autotest_common.sh@10 -- # set +x 00:30:55.467 ************************************ 00:30:55.467 12:53:14 -- rpc/rpc.sh@76 -- # [[ 1 -eq 1 ]] 00:30:55.467 12:53:14 -- rpc/rpc.sh@77 -- # run_test go_rpc go_rpc 00:30:55.467 12:53:14 -- common/autotest_common.sh@1077 -- # 
'[' 2 -le 1 ']' 00:30:55.467 12:53:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:55.467 12:53:14 -- common/autotest_common.sh@10 -- # set +x 00:30:55.467 ************************************ 00:30:55.467 START TEST go_rpc 00:30:55.467 ************************************ 00:30:55.467 12:53:14 -- common/autotest_common.sh@1104 -- # go_rpc 00:30:55.467 12:53:14 -- rpc/rpc.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:30:55.467 12:53:14 -- rpc/rpc.sh@51 -- # bdevs='[]' 00:30:55.467 12:53:14 -- rpc/rpc.sh@52 -- # jq length 00:30:55.467 12:53:14 -- rpc/rpc.sh@52 -- # '[' 0 == 0 ']' 00:30:55.467 12:53:14 -- rpc/rpc.sh@54 -- # rpc_cmd bdev_malloc_create 8 512 00:30:55.467 12:53:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:55.727 12:53:14 -- common/autotest_common.sh@10 -- # set +x 00:30:55.727 12:53:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:55.727 12:53:14 -- rpc/rpc.sh@54 -- # malloc=Malloc2 00:30:55.727 12:53:14 -- rpc/rpc.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:30:55.727 12:53:14 -- rpc/rpc.sh@56 -- # bdevs='[{"aliases":["d464e41f-ab83-49da-a385-69600b851485"],"assigned_rate_limits":{"r_mbytes_per_sec":0,"rw_ios_per_sec":0,"rw_mbytes_per_sec":0,"w_mbytes_per_sec":0},"block_size":512,"claimed":false,"driver_specific":{},"memory_domains":[{"dma_device_id":"SPDK_ACCEL_DMA_DEVICE","dma_device_type":2}],"name":"Malloc2","num_blocks":16384,"product_name":"Malloc disk","supported_io_types":{"abort":true,"compare":false,"compare_and_write":false,"flush":true,"nvme_admin":false,"nvme_io":false,"read":true,"reset":true,"unmap":true,"write":true,"write_zeroes":true},"uuid":"d464e41f-ab83-49da-a385-69600b851485","zoned":false}]' 00:30:55.727 12:53:14 -- rpc/rpc.sh@57 -- # jq length 00:30:55.727 12:53:14 -- rpc/rpc.sh@57 -- # '[' 1 == 1 ']' 00:30:55.727 12:53:14 -- rpc/rpc.sh@59 -- # rpc_cmd bdev_malloc_delete Malloc2 00:30:55.727 12:53:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:55.727 12:53:14 -- common/autotest_common.sh@10 -- # set +x 00:30:55.727 12:53:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:55.727 12:53:14 -- rpc/rpc.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:30:55.727 12:53:14 -- rpc/rpc.sh@60 -- # bdevs='[]' 00:30:55.727 12:53:14 -- rpc/rpc.sh@61 -- # jq length 00:30:55.727 ************************************ 00:30:55.727 END TEST go_rpc 00:30:55.727 ************************************ 00:30:55.727 12:53:15 -- rpc/rpc.sh@61 -- # '[' 0 == 0 ']' 00:30:55.727 00:30:55.727 real 0m0.225s 00:30:55.727 user 0m0.145s 00:30:55.727 sys 0m0.042s 00:30:55.727 12:53:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:55.727 12:53:15 -- common/autotest_common.sh@10 -- # set +x 00:30:55.727 12:53:15 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:30:55.727 12:53:15 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:30:55.727 12:53:15 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:30:55.727 12:53:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:55.727 12:53:15 -- common/autotest_common.sh@10 -- # set +x 00:30:55.727 ************************************ 00:30:55.727 START TEST rpc_daemon_integrity 00:30:55.727 ************************************ 00:30:55.727 12:53:15 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:30:55.727 12:53:15 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:30:55.727 12:53:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:55.727 12:53:15 -- 
common/autotest_common.sh@10 -- # set +x 00:30:55.727 12:53:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:55.727 12:53:15 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:30:55.727 12:53:15 -- rpc/rpc.sh@13 -- # jq length 00:30:55.987 12:53:15 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:30:55.987 12:53:15 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:30:55.987 12:53:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:55.987 12:53:15 -- common/autotest_common.sh@10 -- # set +x 00:30:55.987 12:53:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:55.987 12:53:15 -- rpc/rpc.sh@15 -- # malloc=Malloc3 00:30:55.987 12:53:15 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:30:55.987 12:53:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:55.987 12:53:15 -- common/autotest_common.sh@10 -- # set +x 00:30:55.987 12:53:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:55.987 12:53:15 -- rpc/rpc.sh@16 -- # bdevs='[ 00:30:55.987 { 00:30:55.987 "aliases": [ 00:30:55.987 "060abd52-a773-4c29-8ac8-5a3ef53173d4" 00:30:55.987 ], 00:30:55.987 "assigned_rate_limits": { 00:30:55.987 "r_mbytes_per_sec": 0, 00:30:55.987 "rw_ios_per_sec": 0, 00:30:55.987 "rw_mbytes_per_sec": 0, 00:30:55.987 "w_mbytes_per_sec": 0 00:30:55.987 }, 00:30:55.987 "block_size": 512, 00:30:55.987 "claimed": false, 00:30:55.987 "driver_specific": {}, 00:30:55.987 "memory_domains": [ 00:30:55.987 { 00:30:55.987 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:55.987 "dma_device_type": 2 00:30:55.987 } 00:30:55.987 ], 00:30:55.987 "name": "Malloc3", 00:30:55.987 "num_blocks": 16384, 00:30:55.987 "product_name": "Malloc disk", 00:30:55.987 "supported_io_types": { 00:30:55.987 "abort": true, 00:30:55.987 "compare": false, 00:30:55.987 "compare_and_write": false, 00:30:55.987 "flush": true, 00:30:55.987 "nvme_admin": false, 00:30:55.987 "nvme_io": false, 00:30:55.987 "read": true, 00:30:55.987 "reset": true, 00:30:55.987 "unmap": true, 00:30:55.987 "write": true, 00:30:55.987 "write_zeroes": true 00:30:55.987 }, 00:30:55.987 "uuid": "060abd52-a773-4c29-8ac8-5a3ef53173d4", 00:30:55.987 "zoned": false 00:30:55.987 } 00:30:55.987 ]' 00:30:55.987 12:53:15 -- rpc/rpc.sh@17 -- # jq length 00:30:55.987 12:53:15 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:30:55.987 12:53:15 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc3 -p Passthru0 00:30:55.987 12:53:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:55.987 12:53:15 -- common/autotest_common.sh@10 -- # set +x 00:30:55.987 [2024-07-22 12:53:15.254912] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:30:55.987 [2024-07-22 12:53:15.254973] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:55.987 [2024-07-22 12:53:15.254989] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x847250 00:30:55.987 [2024-07-22 12:53:15.254998] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:55.987 [2024-07-22 12:53:15.256328] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:55.987 [2024-07-22 12:53:15.256362] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:30:55.987 Passthru0 00:30:55.987 12:53:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:55.987 12:53:15 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:30:55.987 12:53:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:55.987 12:53:15 -- common/autotest_common.sh@10 -- # set +x 00:30:55.987 
12:53:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:55.987 12:53:15 -- rpc/rpc.sh@20 -- # bdevs='[ 00:30:55.987 { 00:30:55.987 "aliases": [ 00:30:55.987 "060abd52-a773-4c29-8ac8-5a3ef53173d4" 00:30:55.987 ], 00:30:55.987 "assigned_rate_limits": { 00:30:55.987 "r_mbytes_per_sec": 0, 00:30:55.987 "rw_ios_per_sec": 0, 00:30:55.987 "rw_mbytes_per_sec": 0, 00:30:55.987 "w_mbytes_per_sec": 0 00:30:55.987 }, 00:30:55.987 "block_size": 512, 00:30:55.987 "claim_type": "exclusive_write", 00:30:55.987 "claimed": true, 00:30:55.987 "driver_specific": {}, 00:30:55.987 "memory_domains": [ 00:30:55.987 { 00:30:55.987 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:55.987 "dma_device_type": 2 00:30:55.987 } 00:30:55.987 ], 00:30:55.987 "name": "Malloc3", 00:30:55.987 "num_blocks": 16384, 00:30:55.987 "product_name": "Malloc disk", 00:30:55.987 "supported_io_types": { 00:30:55.987 "abort": true, 00:30:55.987 "compare": false, 00:30:55.987 "compare_and_write": false, 00:30:55.987 "flush": true, 00:30:55.987 "nvme_admin": false, 00:30:55.987 "nvme_io": false, 00:30:55.987 "read": true, 00:30:55.987 "reset": true, 00:30:55.987 "unmap": true, 00:30:55.987 "write": true, 00:30:55.987 "write_zeroes": true 00:30:55.987 }, 00:30:55.987 "uuid": "060abd52-a773-4c29-8ac8-5a3ef53173d4", 00:30:55.987 "zoned": false 00:30:55.987 }, 00:30:55.987 { 00:30:55.987 "aliases": [ 00:30:55.987 "5272c908-be65-567b-b996-1da86fca3401" 00:30:55.987 ], 00:30:55.987 "assigned_rate_limits": { 00:30:55.987 "r_mbytes_per_sec": 0, 00:30:55.987 "rw_ios_per_sec": 0, 00:30:55.987 "rw_mbytes_per_sec": 0, 00:30:55.987 "w_mbytes_per_sec": 0 00:30:55.987 }, 00:30:55.987 "block_size": 512, 00:30:55.987 "claimed": false, 00:30:55.987 "driver_specific": { 00:30:55.987 "passthru": { 00:30:55.987 "base_bdev_name": "Malloc3", 00:30:55.987 "name": "Passthru0" 00:30:55.987 } 00:30:55.987 }, 00:30:55.987 "memory_domains": [ 00:30:55.987 { 00:30:55.987 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:55.987 "dma_device_type": 2 00:30:55.987 } 00:30:55.987 ], 00:30:55.987 "name": "Passthru0", 00:30:55.987 "num_blocks": 16384, 00:30:55.987 "product_name": "passthru", 00:30:55.987 "supported_io_types": { 00:30:55.987 "abort": true, 00:30:55.987 "compare": false, 00:30:55.987 "compare_and_write": false, 00:30:55.987 "flush": true, 00:30:55.988 "nvme_admin": false, 00:30:55.988 "nvme_io": false, 00:30:55.988 "read": true, 00:30:55.988 "reset": true, 00:30:55.988 "unmap": true, 00:30:55.988 "write": true, 00:30:55.988 "write_zeroes": true 00:30:55.988 }, 00:30:55.988 "uuid": "5272c908-be65-567b-b996-1da86fca3401", 00:30:55.988 "zoned": false 00:30:55.988 } 00:30:55.988 ]' 00:30:55.988 12:53:15 -- rpc/rpc.sh@21 -- # jq length 00:30:55.988 12:53:15 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:30:55.988 12:53:15 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:30:55.988 12:53:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:55.988 12:53:15 -- common/autotest_common.sh@10 -- # set +x 00:30:55.988 12:53:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:55.988 12:53:15 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc3 00:30:55.988 12:53:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:55.988 12:53:15 -- common/autotest_common.sh@10 -- # set +x 00:30:55.988 12:53:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:55.988 12:53:15 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:30:55.988 12:53:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:55.988 12:53:15 -- 
common/autotest_common.sh@10 -- # set +x 00:30:55.988 12:53:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:55.988 12:53:15 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:30:55.988 12:53:15 -- rpc/rpc.sh@26 -- # jq length 00:30:56.247 ************************************ 00:30:56.247 END TEST rpc_daemon_integrity 00:30:56.247 ************************************ 00:30:56.247 12:53:15 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:30:56.247 00:30:56.247 real 0m0.316s 00:30:56.247 user 0m0.211s 00:30:56.247 sys 0m0.036s 00:30:56.247 12:53:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:56.247 12:53:15 -- common/autotest_common.sh@10 -- # set +x 00:30:56.247 12:53:15 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:30:56.247 12:53:15 -- rpc/rpc.sh@84 -- # killprocess 67334 00:30:56.247 12:53:15 -- common/autotest_common.sh@926 -- # '[' -z 67334 ']' 00:30:56.247 12:53:15 -- common/autotest_common.sh@930 -- # kill -0 67334 00:30:56.247 12:53:15 -- common/autotest_common.sh@931 -- # uname 00:30:56.247 12:53:15 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:56.247 12:53:15 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 67334 00:30:56.247 killing process with pid 67334 00:30:56.247 12:53:15 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:30:56.247 12:53:15 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:30:56.247 12:53:15 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 67334' 00:30:56.247 12:53:15 -- common/autotest_common.sh@945 -- # kill 67334 00:30:56.247 12:53:15 -- common/autotest_common.sh@950 -- # wait 67334 00:30:56.506 00:30:56.506 real 0m3.126s 00:30:56.506 user 0m4.138s 00:30:56.506 sys 0m0.780s 00:30:56.506 12:53:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:56.506 ************************************ 00:30:56.506 END TEST rpc 00:30:56.506 ************************************ 00:30:56.506 12:53:15 -- common/autotest_common.sh@10 -- # set +x 00:30:56.506 12:53:15 -- spdk/autotest.sh@177 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:30:56.506 12:53:15 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:30:56.506 12:53:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:56.506 12:53:15 -- common/autotest_common.sh@10 -- # set +x 00:30:56.506 ************************************ 00:30:56.506 START TEST rpc_client 00:30:56.506 ************************************ 00:30:56.506 12:53:15 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:30:56.766 * Looking for test storage... 
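The rpc_daemon_integrity block above walks the passthru bdev RPCs end to end: an 8 MB / 512-byte-block malloc bdev (16384 blocks, reported back as Malloc3), a Passthru0 vbdev layered on top of it (which claims the base bdev with claim_type exclusive_write), and then deletion in reverse order. A minimal sketch of the same sequence issued by hand, assuming rpc_cmd simply forwards to scripts/rpc.py on the daemon's default socket as the other suites in this log do:

  scripts/rpc.py bdev_malloc_create 8 512                      # 8 MB, 512 B blocks -> Malloc3
  scripts/rpc.py bdev_passthru_create -b Malloc3 -p Passthru0  # claims Malloc3, registers Passthru0
  scripts/rpc.py bdev_get_bdevs | jq length                    # expect 2 while the pair exists
  scripts/rpc.py bdev_passthru_delete Passthru0                # tear down in reverse order
  scripts/rpc.py bdev_malloc_delete Malloc3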
00:30:56.766 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:30:56.766 12:53:15 -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:30:56.766 OK 00:30:56.766 12:53:15 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:30:56.766 00:30:56.766 real 0m0.097s 00:30:56.766 user 0m0.043s 00:30:56.766 sys 0m0.061s 00:30:56.766 12:53:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:56.766 12:53:15 -- common/autotest_common.sh@10 -- # set +x 00:30:56.766 ************************************ 00:30:56.766 END TEST rpc_client 00:30:56.766 ************************************ 00:30:56.766 12:53:16 -- spdk/autotest.sh@178 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:30:56.766 12:53:16 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:30:56.766 12:53:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:56.766 12:53:16 -- common/autotest_common.sh@10 -- # set +x 00:30:56.766 ************************************ 00:30:56.766 START TEST json_config 00:30:56.766 ************************************ 00:30:56.766 12:53:16 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:30:56.766 12:53:16 -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:30:56.766 12:53:16 -- nvmf/common.sh@7 -- # uname -s 00:30:56.766 12:53:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:56.766 12:53:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:56.766 12:53:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:56.766 12:53:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:56.766 12:53:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:56.766 12:53:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:56.766 12:53:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:56.766 12:53:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:56.766 12:53:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:56.766 12:53:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:56.766 12:53:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:864ae4a3-cd96-439b-8ef2-6dab4d992115 00:30:56.766 12:53:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=864ae4a3-cd96-439b-8ef2-6dab4d992115 00:30:56.766 12:53:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:56.766 12:53:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:56.766 12:53:16 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:30:56.766 12:53:16 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:30:56.766 12:53:16 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:56.766 12:53:16 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:56.766 12:53:16 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:56.767 12:53:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:56.767 12:53:16 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:56.767 12:53:16 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:56.767 12:53:16 -- paths/export.sh@5 -- # export PATH 00:30:56.767 12:53:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:56.767 12:53:16 -- nvmf/common.sh@46 -- # : 0 00:30:56.767 12:53:16 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:30:56.767 12:53:16 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:30:56.767 12:53:16 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:30:56.767 12:53:16 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:56.767 12:53:16 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:56.767 12:53:16 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:30:56.767 12:53:16 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:30:56.767 12:53:16 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:30:56.767 12:53:16 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:30:56.767 12:53:16 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:30:56.767 12:53:16 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:30:56.767 12:53:16 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:30:56.767 12:53:16 -- json_config/json_config.sh@30 -- # app_pid=(['target']='' ['initiator']='') 00:30:56.767 12:53:16 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:30:56.767 12:53:16 -- json_config/json_config.sh@31 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:30:56.767 12:53:16 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:30:56.767 12:53:16 -- json_config/json_config.sh@32 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:30:56.767 12:53:16 -- json_config/json_config.sh@32 -- # declare -A app_params 00:30:56.767 12:53:16 -- json_config/json_config.sh@33 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:30:56.767 12:53:16 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:30:56.767 12:53:16 -- json_config/json_config.sh@43 -- # last_event_id=0 00:30:56.767 12:53:16 -- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:30:56.767 INFO: JSON configuration test init 
00:30:56.767 12:53:16 -- json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init' 00:30:56.767 12:53:16 -- json_config/json_config.sh@420 -- # json_config_test_init 00:30:56.767 12:53:16 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init 00:30:56.767 12:53:16 -- common/autotest_common.sh@712 -- # xtrace_disable 00:30:56.767 12:53:16 -- common/autotest_common.sh@10 -- # set +x 00:30:56.767 12:53:16 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:30:56.767 12:53:16 -- common/autotest_common.sh@712 -- # xtrace_disable 00:30:56.767 12:53:16 -- common/autotest_common.sh@10 -- # set +x 00:30:56.767 12:53:16 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:30:56.767 12:53:16 -- json_config/json_config.sh@98 -- # local app=target 00:30:56.767 12:53:16 -- json_config/json_config.sh@99 -- # shift 00:30:56.767 12:53:16 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:30:56.767 12:53:16 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:30:56.767 12:53:16 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:30:56.767 12:53:16 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:30:56.767 12:53:16 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:30:56.767 12:53:16 -- json_config/json_config.sh@111 -- # app_pid[$app]=67634 00:30:56.767 Waiting for target to run... 00:30:56.767 12:53:16 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:30:56.767 12:53:16 -- json_config/json_config.sh@114 -- # waitforlisten 67634 /var/tmp/spdk_tgt.sock 00:30:56.767 12:53:16 -- common/autotest_common.sh@819 -- # '[' -z 67634 ']' 00:30:56.767 12:53:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:30:56.767 12:53:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:56.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:30:56.767 12:53:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:30:56.767 12:53:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:56.767 12:53:16 -- common/autotest_common.sh@10 -- # set +x 00:30:56.767 12:53:16 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:30:57.026 [2024-07-22 12:53:16.207917] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:30:57.026 [2024-07-22 12:53:16.208022] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67634 ] 00:30:57.286 [2024-07-22 12:53:16.637358] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:57.286 [2024-07-22 12:53:16.703489] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:30:57.286 [2024-07-22 12:53:16.703687] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:57.854 12:53:17 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:57.854 12:53:17 -- common/autotest_common.sh@852 -- # return 0 00:30:57.854 00:30:57.854 12:53:17 -- json_config/json_config.sh@115 -- # echo '' 00:30:57.854 12:53:17 -- json_config/json_config.sh@322 -- # create_accel_config 00:30:57.854 12:53:17 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:30:57.854 12:53:17 -- common/autotest_common.sh@712 -- # xtrace_disable 00:30:57.854 12:53:17 -- common/autotest_common.sh@10 -- # set +x 00:30:57.854 12:53:17 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:30:57.854 12:53:17 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:30:57.854 12:53:17 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:57.854 12:53:17 -- common/autotest_common.sh@10 -- # set +x 00:30:58.113 12:53:17 -- json_config/json_config.sh@326 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:30:58.113 12:53:17 -- json_config/json_config.sh@327 -- # tgt_rpc load_config 00:30:58.113 12:53:17 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:30:58.372 12:53:17 -- json_config/json_config.sh@329 -- # tgt_check_notification_types 00:30:58.372 12:53:17 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:30:58.372 12:53:17 -- common/autotest_common.sh@712 -- # xtrace_disable 00:30:58.372 12:53:17 -- common/autotest_common.sh@10 -- # set +x 00:30:58.372 12:53:17 -- json_config/json_config.sh@48 -- # local ret=0 00:30:58.372 12:53:17 -- json_config/json_config.sh@49 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:30:58.372 12:53:17 -- json_config/json_config.sh@49 -- # local enabled_types 00:30:58.372 12:53:17 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:30:58.372 12:53:17 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:30:58.372 12:53:17 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:30:58.940 12:53:18 -- json_config/json_config.sh@51 -- # get_types=('bdev_register' 'bdev_unregister') 00:30:58.940 12:53:18 -- json_config/json_config.sh@51 -- # local get_types 00:30:58.940 12:53:18 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:30:58.940 12:53:18 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:30:58.940 12:53:18 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:58.940 12:53:18 -- common/autotest_common.sh@10 -- # set +x 00:30:58.940 12:53:18 -- json_config/json_config.sh@58 -- # return 0 00:30:58.940 12:53:18 -- json_config/json_config.sh@331 -- # [[ 0 -eq 1 ]] 00:30:58.940 12:53:18 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 
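Up to this point the json_config suite has only brought the target up in its idle state: spdk_tgt is launched with --wait-for-rpc, the harness waits on the RPC socket, and configuration is then pushed over /var/tmp/spdk_tgt.sock. A condensed sketch of that handshake using the commands traced above (the pipe between gen_nvme.sh and load_config is an assumption; the trace only shows the two calls on consecutive script lines):

  build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
  scripts/gen_nvme.sh --json-with-subsystems | scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types    # expect: bdev_register, bdev_unregister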
00:30:58.940 12:53:18 -- json_config/json_config.sh@339 -- # [[ 0 -eq 1 ]] 00:30:58.940 12:53:18 -- json_config/json_config.sh@343 -- # [[ 1 -eq 1 ]] 00:30:58.940 12:53:18 -- json_config/json_config.sh@344 -- # create_nvmf_subsystem_config 00:30:58.940 12:53:18 -- json_config/json_config.sh@283 -- # timing_enter create_nvmf_subsystem_config 00:30:58.940 12:53:18 -- common/autotest_common.sh@712 -- # xtrace_disable 00:30:58.940 12:53:18 -- common/autotest_common.sh@10 -- # set +x 00:30:58.940 12:53:18 -- json_config/json_config.sh@285 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:30:58.940 12:53:18 -- json_config/json_config.sh@286 -- # [[ tcp == \r\d\m\a ]] 00:30:58.940 12:53:18 -- json_config/json_config.sh@290 -- # [[ -z 127.0.0.1 ]] 00:30:58.940 12:53:18 -- json_config/json_config.sh@295 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:30:58.940 12:53:18 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:30:59.199 MallocForNvmf0 00:30:59.199 12:53:18 -- json_config/json_config.sh@296 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:30:59.199 12:53:18 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:30:59.458 MallocForNvmf1 00:30:59.458 12:53:18 -- json_config/json_config.sh@298 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:30:59.458 12:53:18 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:30:59.458 [2024-07-22 12:53:18.869171] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:59.718 12:53:18 -- json_config/json_config.sh@299 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:59.718 12:53:18 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:59.977 12:53:19 -- json_config/json_config.sh@300 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:30:59.977 12:53:19 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:31:00.236 12:53:19 -- json_config/json_config.sh@301 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:31:00.236 12:53:19 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:31:00.495 12:53:19 -- json_config/json_config.sh@302 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:31:00.495 12:53:19 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:31:00.755 [2024-07-22 12:53:19.954010] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:31:00.755 12:53:19 -- json_config/json_config.sh@304 -- # timing_exit create_nvmf_subsystem_config 00:31:00.755 12:53:19 -- common/autotest_common.sh@718 -- # xtrace_disable 00:31:00.755 12:53:19 -- common/autotest_common.sh@10 -- # set +x 00:31:00.755 12:53:20 -- 
json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:31:00.755 12:53:20 -- common/autotest_common.sh@718 -- # xtrace_disable 00:31:00.755 12:53:20 -- common/autotest_common.sh@10 -- # set +x 00:31:00.755 12:53:20 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:31:00.755 12:53:20 -- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:31:00.755 12:53:20 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:31:01.014 MallocBdevForConfigChangeCheck 00:31:01.014 12:53:20 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:31:01.014 12:53:20 -- common/autotest_common.sh@718 -- # xtrace_disable 00:31:01.014 12:53:20 -- common/autotest_common.sh@10 -- # set +x 00:31:01.014 12:53:20 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:31:01.014 12:53:20 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:31:01.581 INFO: shutting down applications... 00:31:01.581 12:53:20 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 00:31:01.581 12:53:20 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:31:01.581 12:53:20 -- json_config/json_config.sh@431 -- # json_config_clear target 00:31:01.581 12:53:20 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:31:01.581 12:53:20 -- json_config/json_config.sh@386 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:31:01.840 Calling clear_iscsi_subsystem 00:31:01.840 Calling clear_nvmf_subsystem 00:31:01.840 Calling clear_nbd_subsystem 00:31:01.840 Calling clear_ublk_subsystem 00:31:01.840 Calling clear_vhost_blk_subsystem 00:31:01.840 Calling clear_vhost_scsi_subsystem 00:31:01.840 Calling clear_scheduler_subsystem 00:31:01.840 Calling clear_bdev_subsystem 00:31:01.840 Calling clear_accel_subsystem 00:31:01.840 Calling clear_vmd_subsystem 00:31:01.840 Calling clear_sock_subsystem 00:31:01.840 Calling clear_iobuf_subsystem 00:31:01.840 12:53:21 -- json_config/json_config.sh@390 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:31:01.840 12:53:21 -- json_config/json_config.sh@396 -- # count=100 00:31:01.840 12:53:21 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:31:01.840 12:53:21 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:31:01.840 12:53:21 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:31:01.840 12:53:21 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:31:02.408 12:53:21 -- json_config/json_config.sh@398 -- # break 00:31:02.408 12:53:21 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:31:02.408 12:53:21 -- json_config/json_config.sh@432 -- # json_config_test_shutdown_app target 00:31:02.408 12:53:21 -- json_config/json_config.sh@120 -- # local app=target 00:31:02.408 12:53:21 -- json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:31:02.408 12:53:21 -- json_config/json_config.sh@124 -- # [[ -n 67634 ]] 00:31:02.408 12:53:21 -- json_config/json_config.sh@127 -- # kill -SIGINT 67634 00:31:02.408 12:53:21 -- json_config/json_config.sh@129 -- # (( i = 0 )) 
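The create_nvmf_subsystem_config step traced above is the part of this job that actually builds an NVMe-oF/TCP target: two malloc namespaces, a TCP transport, one subsystem, and a listener on 127.0.0.1:4420 (the target confirms with '*** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***'). The RPC sequence, as issued against /var/tmp/spdk_tgt.sock in the trace:

  scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420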
00:31:02.408 12:53:21 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:31:02.408 12:53:21 -- json_config/json_config.sh@130 -- # kill -0 67634 00:31:02.408 12:53:21 -- json_config/json_config.sh@134 -- # sleep 0.5 00:31:02.666 12:53:22 -- json_config/json_config.sh@129 -- # (( i++ )) 00:31:02.666 12:53:22 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:31:02.666 12:53:22 -- json_config/json_config.sh@130 -- # kill -0 67634 00:31:02.666 12:53:22 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:31:02.666 12:53:22 -- json_config/json_config.sh@132 -- # break 00:31:02.666 12:53:22 -- json_config/json_config.sh@137 -- # [[ -n '' ]] 00:31:02.666 SPDK target shutdown done 00:31:02.666 12:53:22 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:31:02.666 INFO: relaunching applications... 00:31:02.666 12:53:22 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 00:31:02.666 12:53:22 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:31:02.666 12:53:22 -- json_config/json_config.sh@98 -- # local app=target 00:31:02.666 12:53:22 -- json_config/json_config.sh@99 -- # shift 00:31:02.666 12:53:22 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:31:02.666 12:53:22 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:31:02.666 12:53:22 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:31:02.667 12:53:22 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:31:02.667 12:53:22 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:31:02.667 12:53:22 -- json_config/json_config.sh@111 -- # app_pid[$app]=67914 00:31:02.667 Waiting for target to run... 00:31:02.667 12:53:22 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:31:02.667 12:53:22 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:31:02.667 12:53:22 -- json_config/json_config.sh@114 -- # waitforlisten 67914 /var/tmp/spdk_tgt.sock 00:31:02.667 12:53:22 -- common/autotest_common.sh@819 -- # '[' -z 67914 ']' 00:31:02.667 12:53:22 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:31:02.667 12:53:22 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:02.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:31:02.667 12:53:22 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:31:02.667 12:53:22 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:02.667 12:53:22 -- common/autotest_common.sh@10 -- # set +x 00:31:02.925 [2024-07-22 12:53:22.101963] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:31:02.926 [2024-07-22 12:53:22.102062] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67914 ] 00:31:03.184 [2024-07-22 12:53:22.543500] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:03.443 [2024-07-22 12:53:22.609802] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:03.443 [2024-07-22 12:53:22.609987] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:03.703 [2024-07-22 12:53:22.913441] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:03.703 [2024-07-22 12:53:22.945619] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:31:03.703 12:53:23 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:03.703 12:53:23 -- common/autotest_common.sh@852 -- # return 0 00:31:03.703 00:31:03.703 12:53:23 -- json_config/json_config.sh@115 -- # echo '' 00:31:03.703 12:53:23 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:31:03.703 INFO: Checking if target configuration is the same... 00:31:03.703 12:53:23 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 00:31:03.703 12:53:23 -- json_config/json_config.sh@441 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:31:03.703 12:53:23 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:31:03.703 12:53:23 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:31:03.703 + '[' 2 -ne 2 ']' 00:31:03.703 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:31:03.703 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:31:03.703 + rootdir=/home/vagrant/spdk_repo/spdk 00:31:03.703 +++ basename /dev/fd/62 00:31:03.703 ++ mktemp /tmp/62.XXX 00:31:03.703 + tmp_file_1=/tmp/62.jEh 00:31:03.703 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:31:03.703 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:31:03.703 + tmp_file_2=/tmp/spdk_tgt_config.json.r1D 00:31:03.703 + ret=0 00:31:03.703 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:31:04.271 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:31:04.271 + diff -u /tmp/62.jEh /tmp/spdk_tgt_config.json.r1D 00:31:04.271 INFO: JSON config files are the same 00:31:04.271 + echo 'INFO: JSON config files are the same' 00:31:04.271 + rm /tmp/62.jEh /tmp/spdk_tgt_config.json.r1D 00:31:04.271 + exit 0 00:31:04.271 12:53:23 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:31:04.271 INFO: changing configuration and checking if this can be detected... 00:31:04.271 12:53:23 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 
00:31:04.271 12:53:23 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:31:04.271 12:53:23 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:31:04.530 12:53:23 -- json_config/json_config.sh@450 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:31:04.530 12:53:23 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:31:04.530 12:53:23 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:31:04.530 + '[' 2 -ne 2 ']' 00:31:04.530 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:31:04.530 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:31:04.530 + rootdir=/home/vagrant/spdk_repo/spdk 00:31:04.530 +++ basename /dev/fd/62 00:31:04.530 ++ mktemp /tmp/62.XXX 00:31:04.530 + tmp_file_1=/tmp/62.vWq 00:31:04.530 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:31:04.530 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:31:04.530 + tmp_file_2=/tmp/spdk_tgt_config.json.Vr5 00:31:04.530 + ret=0 00:31:04.530 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:31:05.098 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:31:05.098 + diff -u /tmp/62.vWq /tmp/spdk_tgt_config.json.Vr5 00:31:05.098 + ret=1 00:31:05.098 + echo '=== Start of file: /tmp/62.vWq ===' 00:31:05.098 + cat /tmp/62.vWq 00:31:05.098 + echo '=== End of file: /tmp/62.vWq ===' 00:31:05.098 + echo '' 00:31:05.098 + echo '=== Start of file: /tmp/spdk_tgt_config.json.Vr5 ===' 00:31:05.098 + cat /tmp/spdk_tgt_config.json.Vr5 00:31:05.098 + echo '=== End of file: /tmp/spdk_tgt_config.json.Vr5 ===' 00:31:05.098 + echo '' 00:31:05.098 + rm /tmp/62.vWq /tmp/spdk_tgt_config.json.Vr5 00:31:05.098 + exit 1 00:31:05.098 INFO: configuration change detected. 00:31:05.098 12:53:24 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 
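The change-detection pass above is a canonicalized diff: the running target's save_config dump and the on-disk spdk_tgt_config.json are each run through config_filter.py -method sort into the mktemp files, and diff -u on those files decides the result. Deleting MallocBdevForConfigChangeCheck before this second comparison is what makes the diff non-empty. A sketch with this run's temp names (the redirections into the temp files are assumed; the trace only shows the filter and diff invocations):

  test/json_config/config_filter.py -method sort > /tmp/62.vWq                    # sorted copy of the live save_config dump
  test/json_config/config_filter.py -method sort > /tmp/spdk_tgt_config.json.Vr5  # sorted copy of the config file on disk
  diff -u /tmp/62.vWq /tmp/spdk_tgt_config.json.Vr5 || ret=1                      # empty diff -> 'JSON config files are the same'
                                                                                  # non-empty  -> 'configuration change detected.'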
00:31:05.098 12:53:24 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:31:05.098 12:53:24 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:31:05.098 12:53:24 -- common/autotest_common.sh@712 -- # xtrace_disable 00:31:05.098 12:53:24 -- common/autotest_common.sh@10 -- # set +x 00:31:05.098 12:53:24 -- json_config/json_config.sh@360 -- # local ret=0 00:31:05.098 12:53:24 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:31:05.098 12:53:24 -- json_config/json_config.sh@370 -- # [[ -n 67914 ]] 00:31:05.098 12:53:24 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:31:05.098 12:53:24 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:31:05.098 12:53:24 -- common/autotest_common.sh@712 -- # xtrace_disable 00:31:05.098 12:53:24 -- common/autotest_common.sh@10 -- # set +x 00:31:05.098 12:53:24 -- json_config/json_config.sh@239 -- # [[ 0 -eq 1 ]] 00:31:05.098 12:53:24 -- json_config/json_config.sh@246 -- # uname -s 00:31:05.098 12:53:24 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]] 00:31:05.098 12:53:24 -- json_config/json_config.sh@247 -- # rm -f /sample_aio 00:31:05.098 12:53:24 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:31:05.098 12:53:24 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:31:05.098 12:53:24 -- common/autotest_common.sh@718 -- # xtrace_disable 00:31:05.098 12:53:24 -- common/autotest_common.sh@10 -- # set +x 00:31:05.098 12:53:24 -- json_config/json_config.sh@376 -- # killprocess 67914 00:31:05.098 12:53:24 -- common/autotest_common.sh@926 -- # '[' -z 67914 ']' 00:31:05.098 12:53:24 -- common/autotest_common.sh@930 -- # kill -0 67914 00:31:05.098 12:53:24 -- common/autotest_common.sh@931 -- # uname 00:31:05.098 12:53:24 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:05.098 12:53:24 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 67914 00:31:05.098 12:53:24 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:31:05.098 12:53:24 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:31:05.098 killing process with pid 67914 00:31:05.098 12:53:24 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 67914' 00:31:05.098 12:53:24 -- common/autotest_common.sh@945 -- # kill 67914 00:31:05.098 12:53:24 -- common/autotest_common.sh@950 -- # wait 67914 00:31:05.357 12:53:24 -- json_config/json_config.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:31:05.357 12:53:24 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:31:05.357 12:53:24 -- common/autotest_common.sh@718 -- # xtrace_disable 00:31:05.357 12:53:24 -- common/autotest_common.sh@10 -- # set +x 00:31:05.358 12:53:24 -- json_config/json_config.sh@381 -- # return 0 00:31:05.358 INFO: Success 00:31:05.358 12:53:24 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:31:05.358 00:31:05.358 real 0m8.652s 00:31:05.358 user 0m12.328s 00:31:05.358 sys 0m2.048s 00:31:05.358 12:53:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:05.358 12:53:24 -- common/autotest_common.sh@10 -- # set +x 00:31:05.358 ************************************ 00:31:05.358 END TEST json_config 00:31:05.358 ************************************ 00:31:05.358 12:53:24 -- spdk/autotest.sh@179 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:31:05.358 
12:53:24 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:31:05.358 12:53:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:05.358 12:53:24 -- common/autotest_common.sh@10 -- # set +x 00:31:05.358 ************************************ 00:31:05.358 START TEST json_config_extra_key 00:31:05.358 ************************************ 00:31:05.358 12:53:24 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:31:05.617 12:53:24 -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:31:05.617 12:53:24 -- nvmf/common.sh@7 -- # uname -s 00:31:05.617 12:53:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:05.617 12:53:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:05.617 12:53:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:05.617 12:53:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:05.617 12:53:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:05.617 12:53:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:05.617 12:53:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:05.617 12:53:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:05.617 12:53:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:05.617 12:53:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:05.617 12:53:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:864ae4a3-cd96-439b-8ef2-6dab4d992115 00:31:05.617 12:53:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=864ae4a3-cd96-439b-8ef2-6dab4d992115 00:31:05.617 12:53:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:05.617 12:53:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:05.617 12:53:24 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:31:05.617 12:53:24 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:31:05.617 12:53:24 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:05.617 12:53:24 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:05.617 12:53:24 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:05.617 12:53:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:05.617 12:53:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:05.617 12:53:24 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:31:05.617 12:53:24 -- paths/export.sh@5 -- # export PATH 00:31:05.617 12:53:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:05.617 12:53:24 -- nvmf/common.sh@46 -- # : 0 00:31:05.617 12:53:24 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:31:05.617 12:53:24 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:31:05.617 12:53:24 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:31:05.617 12:53:24 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:05.617 12:53:24 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:05.617 12:53:24 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:31:05.617 12:53:24 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:31:05.617 12:53:24 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:31:05.617 12:53:24 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:31:05.617 12:53:24 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:31:05.617 12:53:24 -- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:31:05.618 12:53:24 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:31:05.618 12:53:24 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:31:05.618 12:53:24 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:31:05.618 12:53:24 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:31:05.618 12:53:24 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:31:05.618 12:53:24 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:31:05.618 INFO: launching applications... 00:31:05.618 12:53:24 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:31:05.618 12:53:24 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:31:05.618 12:53:24 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:31:05.618 12:53:24 -- json_config/json_config_extra_key.sh@25 -- # shift 00:31:05.618 12:53:24 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:31:05.618 12:53:24 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:31:05.618 12:53:24 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=68089 00:31:05.618 Waiting for target to run... 00:31:05.618 12:53:24 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 
00:31:05.618 12:53:24 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 68089 /var/tmp/spdk_tgt.sock 00:31:05.618 12:53:24 -- json_config/json_config_extra_key.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:31:05.618 12:53:24 -- common/autotest_common.sh@819 -- # '[' -z 68089 ']' 00:31:05.618 12:53:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:31:05.618 12:53:24 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:05.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:31:05.618 12:53:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:31:05.618 12:53:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:05.618 12:53:24 -- common/autotest_common.sh@10 -- # set +x 00:31:05.618 [2024-07-22 12:53:24.885555] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:31:05.618 [2024-07-22 12:53:24.886125] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68089 ] 00:31:06.185 [2024-07-22 12:53:25.317141] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:06.185 [2024-07-22 12:53:25.385582] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:06.185 [2024-07-22 12:53:25.385801] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:06.753 12:53:25 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:06.753 12:53:25 -- common/autotest_common.sh@852 -- # return 0 00:31:06.753 00:31:06.753 12:53:25 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:31:06.753 INFO: shutting down applications... 00:31:06.753 12:53:25 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 
00:31:06.753 12:53:25 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:31:06.753 12:53:25 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:31:06.753 12:53:25 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:31:06.753 12:53:25 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 68089 ]] 00:31:06.753 12:53:25 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 68089 00:31:06.753 12:53:25 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:31:06.753 12:53:25 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:31:06.753 12:53:25 -- json_config/json_config_extra_key.sh@50 -- # kill -0 68089 00:31:06.753 12:53:25 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:31:07.011 12:53:26 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:31:07.011 12:53:26 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:31:07.011 12:53:26 -- json_config/json_config_extra_key.sh@50 -- # kill -0 68089 00:31:07.011 12:53:26 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:31:07.011 12:53:26 -- json_config/json_config_extra_key.sh@52 -- # break 00:31:07.011 SPDK target shutdown done 00:31:07.011 12:53:26 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:31:07.011 12:53:26 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:31:07.011 Success 00:31:07.011 12:53:26 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:31:07.011 00:31:07.011 real 0m1.663s 00:31:07.011 user 0m1.601s 00:31:07.011 sys 0m0.459s 00:31:07.011 12:53:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:07.011 12:53:26 -- common/autotest_common.sh@10 -- # set +x 00:31:07.011 ************************************ 00:31:07.011 END TEST json_config_extra_key 00:31:07.011 ************************************ 00:31:07.270 12:53:26 -- spdk/autotest.sh@180 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:31:07.270 12:53:26 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:31:07.270 12:53:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:07.270 12:53:26 -- common/autotest_common.sh@10 -- # set +x 00:31:07.270 ************************************ 00:31:07.270 START TEST alias_rpc 00:31:07.270 ************************************ 00:31:07.270 12:53:26 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:31:07.270 * Looking for test storage... 00:31:07.270 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:31:07.270 12:53:26 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:31:07.270 12:53:26 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=68169 00:31:07.271 12:53:26 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 68169 00:31:07.271 12:53:26 -- common/autotest_common.sh@819 -- # '[' -z 68169 ']' 00:31:07.271 12:53:26 -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:07.271 12:53:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:07.271 12:53:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:07.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:07.271 12:53:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
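Both json_config and json_config_extra_key shut the target down the same way, most recently visible in the teardown traced above: send SIGINT to the saved PID, then poll it for at most 30 half-second intervals before declaring 'SPDK target shutdown done'. A reconstruction from the xtrace lines (variable name simplified to pid; the scripts use app_pid[$app]):

  kill -SIGINT "$pid"
  for (( i = 0; i < 30; i++ )); do
      if ! kill -0 "$pid" 2>/dev/null; then  # target has exited
          pid=''
          break
      fi
      sleep 0.5
  done
  echo 'SPDK target shutdown done'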
00:31:07.271 12:53:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:07.271 12:53:26 -- common/autotest_common.sh@10 -- # set +x 00:31:07.271 [2024-07-22 12:53:26.585562] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:31:07.271 [2024-07-22 12:53:26.586063] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68169 ] 00:31:07.529 [2024-07-22 12:53:26.719677] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:07.529 [2024-07-22 12:53:26.809301] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:07.529 [2024-07-22 12:53:26.809500] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:08.464 12:53:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:08.464 12:53:27 -- common/autotest_common.sh@852 -- # return 0 00:31:08.464 12:53:27 -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:31:08.464 12:53:27 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 68169 00:31:08.464 12:53:27 -- common/autotest_common.sh@926 -- # '[' -z 68169 ']' 00:31:08.464 12:53:27 -- common/autotest_common.sh@930 -- # kill -0 68169 00:31:08.464 12:53:27 -- common/autotest_common.sh@931 -- # uname 00:31:08.464 12:53:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:08.464 12:53:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 68169 00:31:08.723 12:53:27 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:31:08.723 12:53:27 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:31:08.723 killing process with pid 68169 00:31:08.723 12:53:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 68169' 00:31:08.723 12:53:27 -- common/autotest_common.sh@945 -- # kill 68169 00:31:08.723 12:53:27 -- common/autotest_common.sh@950 -- # wait 68169 00:31:08.982 00:31:08.982 real 0m1.806s 00:31:08.982 user 0m2.079s 00:31:08.982 sys 0m0.445s 00:31:08.982 12:53:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:08.982 12:53:28 -- common/autotest_common.sh@10 -- # set +x 00:31:08.982 ************************************ 00:31:08.982 END TEST alias_rpc 00:31:08.982 ************************************ 00:31:08.982 12:53:28 -- spdk/autotest.sh@182 -- # [[ 1 -eq 0 ]] 00:31:08.982 12:53:28 -- spdk/autotest.sh@186 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:31:08.982 12:53:28 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:31:08.982 12:53:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:08.982 12:53:28 -- common/autotest_common.sh@10 -- # set +x 00:31:08.982 ************************************ 00:31:08.982 START TEST dpdk_mem_utility 00:31:08.982 ************************************ 00:31:08.982 12:53:28 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:31:08.982 * Looking for test storage... 
00:31:08.982 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:31:08.982 12:53:28 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:31:08.982 12:53:28 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=68250 00:31:08.982 12:53:28 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:08.982 12:53:28 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 68250 00:31:08.982 12:53:28 -- common/autotest_common.sh@819 -- # '[' -z 68250 ']' 00:31:08.982 12:53:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:08.982 12:53:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:08.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:08.982 12:53:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:08.982 12:53:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:08.982 12:53:28 -- common/autotest_common.sh@10 -- # set +x 00:31:09.241 [2024-07-22 12:53:28.445183] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:31:09.241 [2024-07-22 12:53:28.445719] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68250 ] 00:31:09.241 [2024-07-22 12:53:28.576028] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:09.500 [2024-07-22 12:53:28.672969] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:09.500 [2024-07-22 12:53:28.673125] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:10.438 12:53:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:10.439 12:53:29 -- common/autotest_common.sh@852 -- # return 0 00:31:10.439 12:53:29 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:31:10.439 12:53:29 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:31:10.439 12:53:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:10.439 12:53:29 -- common/autotest_common.sh@10 -- # set +x 00:31:10.439 { 00:31:10.439 "filename": "/tmp/spdk_mem_dump.txt" 00:31:10.439 } 00:31:10.439 12:53:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:10.439 12:53:29 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:31:10.439 DPDK memory size 814.000000 MiB in 1 heap(s) 00:31:10.439 1 heaps totaling size 814.000000 MiB 00:31:10.439 size: 814.000000 MiB heap id: 0 00:31:10.439 end heaps---------- 00:31:10.439 8 mempools totaling size 598.116089 MiB 00:31:10.439 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:31:10.439 size: 158.602051 MiB name: PDU_data_out_Pool 00:31:10.439 size: 84.521057 MiB name: bdev_io_68250 00:31:10.439 size: 51.011292 MiB name: evtpool_68250 00:31:10.439 size: 50.003479 MiB name: msgpool_68250 00:31:10.439 size: 21.763794 MiB name: PDU_Pool 00:31:10.439 size: 19.513306 MiB name: SCSI_TASK_Pool 00:31:10.439 size: 0.026123 MiB name: Session_Pool 00:31:10.439 end mempools------- 00:31:10.439 6 memzones totaling size 4.142822 MiB 00:31:10.439 size: 1.000366 MiB name: RG_ring_0_68250 
00:31:10.439 size: 1.000366 MiB name: RG_ring_1_68250 00:31:10.439 size: 1.000366 MiB name: RG_ring_4_68250 00:31:10.439 size: 1.000366 MiB name: RG_ring_5_68250 00:31:10.439 size: 0.125366 MiB name: RG_ring_2_68250 00:31:10.439 size: 0.015991 MiB name: RG_ring_3_68250 00:31:10.439 end memzones------- 00:31:10.439 12:53:29 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:31:10.439 heap id: 0 total size: 814.000000 MiB number of busy elements: 213 number of free elements: 15 00:31:10.439 list of free elements. size: 12.487854 MiB 00:31:10.439 element at address: 0x200000400000 with size: 1.999512 MiB 00:31:10.439 element at address: 0x200018e00000 with size: 0.999878 MiB 00:31:10.439 element at address: 0x200019000000 with size: 0.999878 MiB 00:31:10.439 element at address: 0x200003e00000 with size: 0.996277 MiB 00:31:10.439 element at address: 0x200031c00000 with size: 0.994446 MiB 00:31:10.439 element at address: 0x200013800000 with size: 0.978699 MiB 00:31:10.439 element at address: 0x200007000000 with size: 0.959839 MiB 00:31:10.439 element at address: 0x200019200000 with size: 0.936584 MiB 00:31:10.439 element at address: 0x200000200000 with size: 0.837219 MiB 00:31:10.439 element at address: 0x20001aa00000 with size: 0.572632 MiB 00:31:10.439 element at address: 0x20000b200000 with size: 0.489990 MiB 00:31:10.439 element at address: 0x200000800000 with size: 0.487061 MiB 00:31:10.439 element at address: 0x200019400000 with size: 0.485657 MiB 00:31:10.439 element at address: 0x200027e00000 with size: 0.398499 MiB 00:31:10.439 element at address: 0x200003a00000 with size: 0.351685 MiB 00:31:10.439 list of standard malloc elements. size: 199.249573 MiB 00:31:10.439 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:31:10.439 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:31:10.439 element at address: 0x200018efff80 with size: 1.000122 MiB 00:31:10.439 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:31:10.439 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:31:10.439 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:31:10.439 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:31:10.439 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:31:10.439 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:31:10.439 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:31:10.439 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:31:10.439 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:31:10.439 element at address: 0x2000002d6780 with size: 0.000183 MiB 00:31:10.439 element at address: 0x2000002d6840 with size: 0.000183 MiB 00:31:10.439 element at address: 0x2000002d6900 with size: 0.000183 MiB 00:31:10.439 element at address: 0x2000002d69c0 with size: 0.000183 MiB 00:31:10.439 element at address: 0x2000002d6a80 with size: 0.000183 MiB 00:31:10.439 element at address: 0x2000002d6b40 with size: 0.000183 MiB 00:31:10.439 element at address: 0x2000002d6c00 with size: 0.000183 MiB 00:31:10.439 element at address: 0x2000002d6cc0 with size: 0.000183 MiB 00:31:10.439 element at address: 0x2000002d6d80 with size: 0.000183 MiB 00:31:10.439 element at address: 0x2000002d6e40 with size: 0.000183 MiB 00:31:10.439 element at address: 0x2000002d6f00 with size: 0.000183 MiB 00:31:10.439 element at address: 0x2000002d6fc0 with size: 0.000183 MiB 00:31:10.439 element at address: 0x2000002d71c0 with size: 0.000183 MiB 
00:31:10.439 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:31:10.439 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:31:10.439 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:31:10.439 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:31:10.439 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:31:10.439 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:31:10.439 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:31:10.439 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:31:10.439 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:31:10.439 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:31:10.439 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:31:10.439 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:31:10.439 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:31:10.439 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:31:10.439 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:31:10.439 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:31:10.439 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:31:10.439 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:31:10.439 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:31:10.439 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:31:10.439 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:31:10.439 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:31:10.439 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:31:10.439 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:31:10.439 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:31:10.439 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:31:10.439 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:31:10.439 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:31:10.439 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:31:10.439 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:31:10.439 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:31:10.439 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:31:10.439 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:31:10.439 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:31:10.439 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:31:10.439 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:31:10.439 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:31:10.439 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:31:10.439 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:31:10.439 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:31:10.439 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:31:10.439 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:31:10.439 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:31:10.439 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:31:10.439 element at address: 0x200003adb300 with size: 0.000183 MiB 00:31:10.439 element at address: 0x200003adb500 with size: 0.000183 MiB 00:31:10.439 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:31:10.439 element at address: 0x200003affa80 with size: 0.000183 MiB 00:31:10.439 element at address: 0x200003affb40 with size: 0.000183 MiB 00:31:10.439 element at 
address: 0x200003eff0c0 with size: 0.000183 MiB 00:31:10.439 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:31:10.439 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:31:10.439 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:31:10.439 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:31:10.440 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:31:10.440 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:31:10.440 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:31:10.440 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:31:10.440 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:31:10.440 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:31:10.440 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:31:10.440 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:31:10.440 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:31:10.440 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:31:10.440 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:31:10.440 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:31:10.440 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:31:10.440 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:31:10.440 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:31:10.440 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:31:10.440 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:31:10.440 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:31:10.440 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:31:10.440 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:31:10.440 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:31:10.440 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:31:10.440 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:31:10.440 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:31:10.440 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:31:10.440 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:31:10.440 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:31:10.440 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:31:10.440 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:31:10.440 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:31:10.440 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:31:10.440 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:31:10.440 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:31:10.440 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:31:10.440 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:31:10.440 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:31:10.440 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:31:10.440 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:31:10.440 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:31:10.440 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:31:10.440 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:31:10.440 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:31:10.440 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:31:10.440 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:31:10.440 element at address: 0x20001aa94480 
with size: 0.000183 MiB 00:31:10.440 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:31:10.440 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:31:10.440 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:31:10.440 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:31:10.440 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:31:10.440 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:31:10.440 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:31:10.440 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:31:10.440 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:31:10.440 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:31:10.440 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:31:10.440 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:31:10.440 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:31:10.440 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:31:10.440 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:31:10.440 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:31:10.440 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:31:10.440 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:31:10.440 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:31:10.440 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:31:10.440 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:31:10.440 element at address: 0x200027e66040 with size: 0.000183 MiB 00:31:10.440 element at address: 0x200027e66100 with size: 0.000183 MiB 00:31:10.440 element at address: 0x200027e6cd00 with size: 0.000183 MiB 00:31:10.440 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:31:10.440 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:31:10.440 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:31:10.440 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:31:10.440 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:31:10.440 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:31:10.440 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:31:10.440 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:31:10.440 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:31:10.440 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:31:10.440 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:31:10.440 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:31:10.440 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:31:10.440 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:31:10.440 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:31:10.440 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:31:10.440 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:31:10.440 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:31:10.440 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:31:10.440 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:31:10.440 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:31:10.440 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:31:10.440 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:31:10.440 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:31:10.440 element at address: 0x200027e6e100 with size: 0.000183 MiB 
00:31:10.440 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:31:10.440 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:31:10.440 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:31:10.440 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:31:10.440 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:31:10.440 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:31:10.440 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:31:10.440 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:31:10.440 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:31:10.440 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:31:10.440 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:31:10.440 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:31:10.440 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:31:10.440 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:31:10.440 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:31:10.440 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:31:10.440 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:31:10.440 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:31:10.440 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:31:10.440 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:31:10.440 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:31:10.440 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:31:10.440 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:31:10.440 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:31:10.440 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:31:10.440 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:31:10.440 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:31:10.440 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:31:10.440 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:31:10.440 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:31:10.441 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:31:10.441 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:31:10.441 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:31:10.441 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:31:10.441 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:31:10.441 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:31:10.441 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:31:10.441 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:31:10.441 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:31:10.441 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:31:10.441 list of memzone associated elements. 
size: 602.262573 MiB 00:31:10.441 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:31:10.441 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:31:10.441 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:31:10.441 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:31:10.441 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:31:10.441 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_68250_0 00:31:10.441 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:31:10.441 associated memzone info: size: 48.002930 MiB name: MP_evtpool_68250_0 00:31:10.441 element at address: 0x200003fff380 with size: 48.003052 MiB 00:31:10.441 associated memzone info: size: 48.002930 MiB name: MP_msgpool_68250_0 00:31:10.441 element at address: 0x2000195be940 with size: 20.255554 MiB 00:31:10.441 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:31:10.441 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:31:10.441 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:31:10.441 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:31:10.441 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_68250 00:31:10.441 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:31:10.441 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_68250 00:31:10.441 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:31:10.441 associated memzone info: size: 1.007996 MiB name: MP_evtpool_68250 00:31:10.441 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:31:10.441 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:31:10.441 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:31:10.441 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:31:10.441 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:31:10.441 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:31:10.441 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:31:10.441 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:31:10.441 element at address: 0x200003eff180 with size: 1.000488 MiB 00:31:10.441 associated memzone info: size: 1.000366 MiB name: RG_ring_0_68250 00:31:10.441 element at address: 0x200003affc00 with size: 1.000488 MiB 00:31:10.441 associated memzone info: size: 1.000366 MiB name: RG_ring_1_68250 00:31:10.441 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:31:10.441 associated memzone info: size: 1.000366 MiB name: RG_ring_4_68250 00:31:10.441 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:31:10.441 associated memzone info: size: 1.000366 MiB name: RG_ring_5_68250 00:31:10.441 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:31:10.441 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_68250 00:31:10.441 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:31:10.441 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:31:10.441 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:31:10.441 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:31:10.441 element at address: 0x20001947c540 with size: 0.250488 MiB 00:31:10.441 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:31:10.441 element at address: 0x200003adf880 with size: 0.125488 MiB 00:31:10.441 associated memzone info: size: 
0.125366 MiB name: RG_ring_2_68250 00:31:10.441 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:31:10.441 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:31:10.441 element at address: 0x200027e661c0 with size: 0.023743 MiB 00:31:10.441 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:31:10.441 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:31:10.441 associated memzone info: size: 0.015991 MiB name: RG_ring_3_68250 00:31:10.441 element at address: 0x200027e6c300 with size: 0.002441 MiB 00:31:10.441 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:31:10.441 element at address: 0x2000002d7080 with size: 0.000305 MiB 00:31:10.441 associated memzone info: size: 0.000183 MiB name: MP_msgpool_68250 00:31:10.441 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:31:10.441 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_68250 00:31:10.441 element at address: 0x200027e6cdc0 with size: 0.000305 MiB 00:31:10.441 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:31:10.441 12:53:29 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:31:10.441 12:53:29 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 68250 00:31:10.441 12:53:29 -- common/autotest_common.sh@926 -- # '[' -z 68250 ']' 00:31:10.441 12:53:29 -- common/autotest_common.sh@930 -- # kill -0 68250 00:31:10.441 12:53:29 -- common/autotest_common.sh@931 -- # uname 00:31:10.441 12:53:29 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:10.441 12:53:29 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 68250 00:31:10.441 12:53:29 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:31:10.441 12:53:29 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:31:10.441 killing process with pid 68250 00:31:10.441 12:53:29 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 68250' 00:31:10.441 12:53:29 -- common/autotest_common.sh@945 -- # kill 68250 00:31:10.441 12:53:29 -- common/autotest_common.sh@950 -- # wait 68250 00:31:10.700 00:31:10.700 real 0m1.723s 00:31:10.700 user 0m1.931s 00:31:10.700 sys 0m0.428s 00:31:10.700 12:53:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:10.700 12:53:30 -- common/autotest_common.sh@10 -- # set +x 00:31:10.700 ************************************ 00:31:10.700 END TEST dpdk_mem_utility 00:31:10.700 ************************************ 00:31:10.700 12:53:30 -- spdk/autotest.sh@187 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:31:10.700 12:53:30 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:31:10.700 12:53:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:10.700 12:53:30 -- common/autotest_common.sh@10 -- # set +x 00:31:10.700 ************************************ 00:31:10.700 START TEST event 00:31:10.700 ************************************ 00:31:10.700 12:53:30 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:31:10.959 * Looking for test storage... 
00:31:10.959 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:31:10.959 12:53:30 -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:31:10.959 12:53:30 -- bdev/nbd_common.sh@6 -- # set -e 00:31:10.959 12:53:30 -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:31:10.959 12:53:30 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:31:10.959 12:53:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:10.959 12:53:30 -- common/autotest_common.sh@10 -- # set +x 00:31:10.959 ************************************ 00:31:10.959 START TEST event_perf 00:31:10.959 ************************************ 00:31:10.959 12:53:30 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:31:10.959 Running I/O for 1 seconds...[2024-07-22 12:53:30.188245] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:31:10.959 [2024-07-22 12:53:30.188343] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68344 ] 00:31:10.959 [2024-07-22 12:53:30.323205] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:11.218 [2024-07-22 12:53:30.431981] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:11.218 [2024-07-22 12:53:30.432067] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:31:11.218 [2024-07-22 12:53:30.432190] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:31:11.218 [2024-07-22 12:53:30.432196] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:12.155 Running I/O for 1 seconds... 00:31:12.155 lcore 0: 195824 00:31:12.155 lcore 1: 195823 00:31:12.155 lcore 2: 195825 00:31:12.155 lcore 3: 195822 00:31:12.155 done. 00:31:12.155 00:31:12.155 real 0m1.330s 00:31:12.155 user 0m4.141s 00:31:12.155 sys 0m0.071s 00:31:12.155 12:53:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:12.155 ************************************ 00:31:12.155 END TEST event_perf 00:31:12.155 ************************************ 00:31:12.155 12:53:31 -- common/autotest_common.sh@10 -- # set +x 00:31:12.155 12:53:31 -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:31:12.155 12:53:31 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:31:12.155 12:53:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:12.155 12:53:31 -- common/autotest_common.sh@10 -- # set +x 00:31:12.155 ************************************ 00:31:12.155 START TEST event_reactor 00:31:12.155 ************************************ 00:31:12.155 12:53:31 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:31:12.155 [2024-07-22 12:53:31.568583] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:31:12.155 [2024-07-22 12:53:31.568669] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68383 ] 00:31:12.414 [2024-07-22 12:53:31.704912] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:12.414 [2024-07-22 12:53:31.791111] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:13.953 test_start 00:31:13.953 oneshot 00:31:13.953 tick 100 00:31:13.953 tick 100 00:31:13.953 tick 250 00:31:13.953 tick 100 00:31:13.953 tick 100 00:31:13.953 tick 250 00:31:13.953 tick 500 00:31:13.953 tick 100 00:31:13.953 tick 100 00:31:13.953 tick 100 00:31:13.953 tick 250 00:31:13.953 tick 100 00:31:13.953 tick 100 00:31:13.953 test_end 00:31:13.953 00:31:13.953 real 0m1.313s 00:31:13.953 user 0m1.152s 00:31:13.953 sys 0m0.055s 00:31:13.953 12:53:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:13.953 12:53:32 -- common/autotest_common.sh@10 -- # set +x 00:31:13.953 ************************************ 00:31:13.953 END TEST event_reactor 00:31:13.953 ************************************ 00:31:13.954 12:53:32 -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:31:13.954 12:53:32 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:31:13.954 12:53:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:13.954 12:53:32 -- common/autotest_common.sh@10 -- # set +x 00:31:13.954 ************************************ 00:31:13.954 START TEST event_reactor_perf 00:31:13.954 ************************************ 00:31:13.954 12:53:32 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:31:13.954 [2024-07-22 12:53:32.931066] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:31:13.954 [2024-07-22 12:53:32.931173] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68418 ] 00:31:13.954 [2024-07-22 12:53:33.067844] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:13.954 [2024-07-22 12:53:33.163440] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:14.932 test_start 00:31:14.932 test_end 00:31:14.932 Performance: 363299 events per second 00:31:14.932 00:31:14.932 real 0m1.328s 00:31:14.932 user 0m1.163s 00:31:14.932 sys 0m0.059s 00:31:14.932 12:53:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:14.932 12:53:34 -- common/autotest_common.sh@10 -- # set +x 00:31:14.932 ************************************ 00:31:14.932 END TEST event_reactor_perf 00:31:14.932 ************************************ 00:31:14.932 12:53:34 -- event/event.sh@49 -- # uname -s 00:31:14.932 12:53:34 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:31:14.932 12:53:34 -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:31:14.932 12:53:34 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:31:14.932 12:53:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:14.932 12:53:34 -- common/autotest_common.sh@10 -- # set +x 00:31:14.932 ************************************ 00:31:14.932 START TEST event_scheduler 00:31:14.932 ************************************ 00:31:14.932 12:53:34 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:31:15.191 * Looking for test storage... 00:31:15.191 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:31:15.191 12:53:34 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:31:15.191 12:53:34 -- scheduler/scheduler.sh@35 -- # scheduler_pid=68473 00:31:15.191 12:53:34 -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:31:15.191 12:53:34 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:31:15.191 12:53:34 -- scheduler/scheduler.sh@37 -- # waitforlisten 68473 00:31:15.191 12:53:34 -- common/autotest_common.sh@819 -- # '[' -z 68473 ']' 00:31:15.191 12:53:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:15.191 12:53:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:15.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:15.191 12:53:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:15.191 12:53:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:15.191 12:53:34 -- common/autotest_common.sh@10 -- # set +x 00:31:15.191 [2024-07-22 12:53:34.422818] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:31:15.191 [2024-07-22 12:53:34.422922] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68473 ] 00:31:15.191 [2024-07-22 12:53:34.563894] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:15.450 [2024-07-22 12:53:34.670200] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:15.450 [2024-07-22 12:53:34.670339] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:15.450 [2024-07-22 12:53:34.670490] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:31:15.450 [2024-07-22 12:53:34.670714] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:31:16.017 12:53:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:16.017 12:53:35 -- common/autotest_common.sh@852 -- # return 0 00:31:16.017 12:53:35 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:31:16.017 12:53:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:16.017 12:53:35 -- common/autotest_common.sh@10 -- # set +x 00:31:16.017 POWER: Env isn't set yet! 00:31:16.017 POWER: Attempting to initialise ACPI cpufreq power management... 00:31:16.017 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:31:16.017 POWER: Cannot set governor of lcore 0 to userspace 00:31:16.017 POWER: Attempting to initialise PSTAT power management... 00:31:16.017 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:31:16.017 POWER: Cannot set governor of lcore 0 to performance 00:31:16.017 POWER: Attempting to initialise CPPC power management... 00:31:16.017 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:31:16.017 POWER: Cannot set governor of lcore 0 to userspace 00:31:16.017 POWER: Attempting to initialise VM power management... 00:31:16.017 GUEST_CHANNEL: Unable to to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:31:16.017 POWER: Unable to set Power Management Environment for lcore 0 00:31:16.017 [2024-07-22 12:53:35.423764] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:31:16.017 [2024-07-22 12:53:35.423779] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:31:16.017 [2024-07-22 12:53:35.423788] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:31:16.017 [2024-07-22 12:53:35.423800] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:31:16.017 [2024-07-22 12:53:35.423808] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:31:16.017 [2024-07-22 12:53:35.423815] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:31:16.017 12:53:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:16.017 12:53:35 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:31:16.017 12:53:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:16.017 12:53:35 -- common/autotest_common.sh@10 -- # set +x 00:31:16.276 [2024-07-22 12:53:35.523384] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
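The event_scheduler run above launches its test app with --wait-for-rpc, switches to the dynamic scheduler over RPC, and only then completes framework initialization; the POWER/cpufreq errors are the expected fallback path on this VM, where no frequency governor is available, so the dpdk governor cannot initialize. For reference, the same two-step RPC sequence can be reproduced by hand against an app started that way; a minimal sketch, assuming the default /var/tmp/spdk.sock RPC socket this test waits on:

    # switch to the dynamic scheduler while the app is still waiting for RPCs,
    # then let framework initialization finish (the same calls scheduler.sh issues above)
    ./scripts/rpc.py -s /var/tmp/spdk.sock framework_set_scheduler dynamic
    ./scripts/rpc.py -s /var/tmp/spdk.sock framework_start_init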
00:31:16.276 12:53:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:16.276 12:53:35 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:31:16.276 12:53:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:31:16.276 12:53:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:16.276 12:53:35 -- common/autotest_common.sh@10 -- # set +x 00:31:16.276 ************************************ 00:31:16.276 START TEST scheduler_create_thread 00:31:16.276 ************************************ 00:31:16.276 12:53:35 -- common/autotest_common.sh@1104 -- # scheduler_create_thread 00:31:16.276 12:53:35 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:31:16.276 12:53:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:16.276 12:53:35 -- common/autotest_common.sh@10 -- # set +x 00:31:16.276 2 00:31:16.276 12:53:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:16.276 12:53:35 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:31:16.276 12:53:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:16.276 12:53:35 -- common/autotest_common.sh@10 -- # set +x 00:31:16.276 3 00:31:16.276 12:53:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:16.276 12:53:35 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:31:16.276 12:53:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:16.276 12:53:35 -- common/autotest_common.sh@10 -- # set +x 00:31:16.276 4 00:31:16.276 12:53:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:16.276 12:53:35 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:31:16.276 12:53:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:16.276 12:53:35 -- common/autotest_common.sh@10 -- # set +x 00:31:16.276 5 00:31:16.276 12:53:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:16.276 12:53:35 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:31:16.276 12:53:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:16.276 12:53:35 -- common/autotest_common.sh@10 -- # set +x 00:31:16.276 6 00:31:16.276 12:53:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:16.276 12:53:35 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:31:16.276 12:53:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:16.276 12:53:35 -- common/autotest_common.sh@10 -- # set +x 00:31:16.276 7 00:31:16.276 12:53:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:16.277 12:53:35 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:31:16.277 12:53:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:16.277 12:53:35 -- common/autotest_common.sh@10 -- # set +x 00:31:16.277 8 00:31:16.277 12:53:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:16.277 12:53:35 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:31:16.277 12:53:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:16.277 12:53:35 -- common/autotest_common.sh@10 -- # set +x 00:31:16.277 9 00:31:16.277 
12:53:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:16.277 12:53:35 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:31:16.277 12:53:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:16.277 12:53:35 -- common/autotest_common.sh@10 -- # set +x 00:31:16.277 10 00:31:16.277 12:53:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:16.277 12:53:35 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:31:16.277 12:53:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:16.277 12:53:35 -- common/autotest_common.sh@10 -- # set +x 00:31:16.277 12:53:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:16.277 12:53:35 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:31:16.277 12:53:35 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:31:16.277 12:53:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:16.277 12:53:35 -- common/autotest_common.sh@10 -- # set +x 00:31:16.277 12:53:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:16.277 12:53:35 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:31:16.277 12:53:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:16.277 12:53:35 -- common/autotest_common.sh@10 -- # set +x 00:31:16.277 12:53:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:16.277 12:53:35 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:31:16.277 12:53:35 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:31:16.277 12:53:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:16.277 12:53:35 -- common/autotest_common.sh@10 -- # set +x 00:31:17.655 12:53:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:17.656 00:31:17.656 real 0m1.172s 00:31:17.656 user 0m0.012s 00:31:17.656 sys 0m0.006s 00:31:17.656 12:53:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:17.656 12:53:36 -- common/autotest_common.sh@10 -- # set +x 00:31:17.656 ************************************ 00:31:17.656 END TEST scheduler_create_thread 00:31:17.656 ************************************ 00:31:17.656 12:53:36 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:31:17.656 12:53:36 -- scheduler/scheduler.sh@46 -- # killprocess 68473 00:31:17.656 12:53:36 -- common/autotest_common.sh@926 -- # '[' -z 68473 ']' 00:31:17.656 12:53:36 -- common/autotest_common.sh@930 -- # kill -0 68473 00:31:17.656 12:53:36 -- common/autotest_common.sh@931 -- # uname 00:31:17.656 12:53:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:17.656 12:53:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 68473 00:31:17.656 12:53:36 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:31:17.656 12:53:36 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:31:17.656 killing process with pid 68473 00:31:17.656 12:53:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 68473' 00:31:17.656 12:53:36 -- common/autotest_common.sh@945 -- # kill 68473 00:31:17.656 12:53:36 -- common/autotest_common.sh@950 -- # wait 68473 00:31:17.915 [2024-07-22 12:53:37.185735] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:31:18.174 00:31:18.174 real 0m3.112s 00:31:18.174 user 0m5.667s 00:31:18.174 sys 0m0.398s 00:31:18.174 12:53:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:18.174 12:53:37 -- common/autotest_common.sh@10 -- # set +x 00:31:18.174 ************************************ 00:31:18.174 END TEST event_scheduler 00:31:18.174 ************************************ 00:31:18.174 12:53:37 -- event/event.sh@51 -- # modprobe -n nbd 00:31:18.174 12:53:37 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:31:18.174 12:53:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:31:18.174 12:53:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:18.174 12:53:37 -- common/autotest_common.sh@10 -- # set +x 00:31:18.174 ************************************ 00:31:18.174 START TEST app_repeat 00:31:18.174 ************************************ 00:31:18.174 12:53:37 -- common/autotest_common.sh@1104 -- # app_repeat_test 00:31:18.174 12:53:37 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:18.174 12:53:37 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:31:18.174 12:53:37 -- event/event.sh@13 -- # local nbd_list 00:31:18.174 12:53:37 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:31:18.174 12:53:37 -- event/event.sh@14 -- # local bdev_list 00:31:18.174 12:53:37 -- event/event.sh@15 -- # local repeat_times=4 00:31:18.174 12:53:37 -- event/event.sh@17 -- # modprobe nbd 00:31:18.174 12:53:37 -- event/event.sh@19 -- # repeat_pid=68575 00:31:18.174 12:53:37 -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:31:18.174 12:53:37 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:31:18.174 Process app_repeat pid: 68575 00:31:18.174 12:53:37 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 68575' 00:31:18.174 12:53:37 -- event/event.sh@23 -- # for i in {0..2} 00:31:18.174 spdk_app_start Round 0 00:31:18.174 12:53:37 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:31:18.174 12:53:37 -- event/event.sh@25 -- # waitforlisten 68575 /var/tmp/spdk-nbd.sock 00:31:18.174 12:53:37 -- common/autotest_common.sh@819 -- # '[' -z 68575 ']' 00:31:18.174 12:53:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:31:18.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:31:18.174 12:53:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:18.174 12:53:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:31:18.174 12:53:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:18.174 12:53:37 -- common/autotest_common.sh@10 -- # set +x 00:31:18.174 [2024-07-22 12:53:37.495775] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:31:18.174 [2024-07-22 12:53:37.496022] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68575 ] 00:31:18.433 [2024-07-22 12:53:37.635079] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:18.433 [2024-07-22 12:53:37.737695] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:18.433 [2024-07-22 12:53:37.737708] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:19.366 12:53:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:19.366 12:53:38 -- common/autotest_common.sh@852 -- # return 0 00:31:19.366 12:53:38 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:31:19.623 Malloc0 00:31:19.623 12:53:38 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:31:19.882 Malloc1 00:31:19.882 12:53:39 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:31:19.882 12:53:39 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:19.882 12:53:39 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:31:19.882 12:53:39 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:31:19.882 12:53:39 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:31:19.882 12:53:39 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:31:19.882 12:53:39 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:31:19.882 12:53:39 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:19.882 12:53:39 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:31:19.882 12:53:39 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:31:19.882 12:53:39 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:31:19.882 12:53:39 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:31:19.882 12:53:39 -- bdev/nbd_common.sh@12 -- # local i 00:31:19.882 12:53:39 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:31:19.882 12:53:39 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:31:19.882 12:53:39 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:31:20.141 /dev/nbd0 00:31:20.141 12:53:39 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:31:20.141 12:53:39 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:31:20.141 12:53:39 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:31:20.141 12:53:39 -- common/autotest_common.sh@857 -- # local i 00:31:20.141 12:53:39 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:31:20.141 12:53:39 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:31:20.141 12:53:39 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:31:20.141 12:53:39 -- common/autotest_common.sh@861 -- # break 00:31:20.141 12:53:39 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:31:20.141 12:53:39 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:31:20.141 12:53:39 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:31:20.141 1+0 records in 00:31:20.141 1+0 records out 00:31:20.141 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000298259 s, 13.7 MB/s 00:31:20.141 12:53:39 -- 
common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:31:20.141 12:53:39 -- common/autotest_common.sh@874 -- # size=4096 00:31:20.141 12:53:39 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:31:20.141 12:53:39 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:31:20.141 12:53:39 -- common/autotest_common.sh@877 -- # return 0 00:31:20.141 12:53:39 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:20.141 12:53:39 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:31:20.141 12:53:39 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:31:20.400 /dev/nbd1 00:31:20.400 12:53:39 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:31:20.400 12:53:39 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:31:20.400 12:53:39 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:31:20.400 12:53:39 -- common/autotest_common.sh@857 -- # local i 00:31:20.400 12:53:39 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:31:20.400 12:53:39 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:31:20.400 12:53:39 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:31:20.400 12:53:39 -- common/autotest_common.sh@861 -- # break 00:31:20.400 12:53:39 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:31:20.400 12:53:39 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:31:20.400 12:53:39 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:31:20.400 1+0 records in 00:31:20.400 1+0 records out 00:31:20.400 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000581254 s, 7.0 MB/s 00:31:20.400 12:53:39 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:31:20.400 12:53:39 -- common/autotest_common.sh@874 -- # size=4096 00:31:20.400 12:53:39 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:31:20.400 12:53:39 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:31:20.400 12:53:39 -- common/autotest_common.sh@877 -- # return 0 00:31:20.400 12:53:39 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:20.400 12:53:39 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:31:20.400 12:53:39 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:31:20.400 12:53:39 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:20.400 12:53:39 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:31:20.659 12:53:39 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:31:20.659 { 00:31:20.659 "bdev_name": "Malloc0", 00:31:20.659 "nbd_device": "/dev/nbd0" 00:31:20.659 }, 00:31:20.659 { 00:31:20.659 "bdev_name": "Malloc1", 00:31:20.659 "nbd_device": "/dev/nbd1" 00:31:20.659 } 00:31:20.659 ]' 00:31:20.659 12:53:39 -- bdev/nbd_common.sh@64 -- # echo '[ 00:31:20.659 { 00:31:20.659 "bdev_name": "Malloc0", 00:31:20.659 "nbd_device": "/dev/nbd0" 00:31:20.659 }, 00:31:20.659 { 00:31:20.659 "bdev_name": "Malloc1", 00:31:20.659 "nbd_device": "/dev/nbd1" 00:31:20.659 } 00:31:20.659 ]' 00:31:20.659 12:53:39 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:31:20.659 12:53:40 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:31:20.659 /dev/nbd1' 00:31:20.659 12:53:40 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:31:20.659 /dev/nbd1' 00:31:20.659 12:53:40 -- 
bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:31:20.659 12:53:40 -- bdev/nbd_common.sh@65 -- # count=2 00:31:20.659 12:53:40 -- bdev/nbd_common.sh@66 -- # echo 2 00:31:20.659 12:53:40 -- bdev/nbd_common.sh@95 -- # count=2 00:31:20.659 12:53:40 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:31:20.659 12:53:40 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:31:20.659 12:53:40 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:31:20.659 12:53:40 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:31:20.659 12:53:40 -- bdev/nbd_common.sh@71 -- # local operation=write 00:31:20.659 12:53:40 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:31:20.659 12:53:40 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:31:20.659 12:53:40 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:31:20.659 256+0 records in 00:31:20.659 256+0 records out 00:31:20.659 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0065009 s, 161 MB/s 00:31:20.659 12:53:40 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:31:20.659 12:53:40 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:31:20.918 256+0 records in 00:31:20.918 256+0 records out 00:31:20.918 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0256841 s, 40.8 MB/s 00:31:20.918 12:53:40 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:31:20.918 12:53:40 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:31:20.918 256+0 records in 00:31:20.918 256+0 records out 00:31:20.918 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.026352 s, 39.8 MB/s 00:31:20.918 12:53:40 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:31:20.918 12:53:40 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:31:20.918 12:53:40 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:31:20.918 12:53:40 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:31:20.918 12:53:40 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:31:20.918 12:53:40 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:31:20.918 12:53:40 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:31:20.918 12:53:40 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:31:20.918 12:53:40 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:31:20.918 12:53:40 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:31:20.918 12:53:40 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:31:20.918 12:53:40 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:31:20.918 12:53:40 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:31:20.918 12:53:40 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:20.918 12:53:40 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:31:20.918 12:53:40 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:20.918 12:53:40 -- bdev/nbd_common.sh@51 -- # local i 00:31:20.918 12:53:40 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:20.918 12:53:40 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:31:21.177 12:53:40 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:31:21.177 12:53:40 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:31:21.177 12:53:40 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:31:21.177 12:53:40 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:21.177 12:53:40 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:21.177 12:53:40 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:31:21.177 12:53:40 -- bdev/nbd_common.sh@41 -- # break 00:31:21.177 12:53:40 -- bdev/nbd_common.sh@45 -- # return 0 00:31:21.177 12:53:40 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:21.177 12:53:40 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:31:21.435 12:53:40 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:31:21.435 12:53:40 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:31:21.435 12:53:40 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:31:21.435 12:53:40 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:21.435 12:53:40 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:21.435 12:53:40 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:31:21.435 12:53:40 -- bdev/nbd_common.sh@41 -- # break 00:31:21.435 12:53:40 -- bdev/nbd_common.sh@45 -- # return 0 00:31:21.435 12:53:40 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:31:21.435 12:53:40 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:21.435 12:53:40 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:31:21.693 12:53:40 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:31:21.693 12:53:40 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:31:21.693 12:53:40 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:31:21.693 12:53:41 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:31:21.693 12:53:41 -- bdev/nbd_common.sh@65 -- # echo '' 00:31:21.693 12:53:41 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:31:21.693 12:53:41 -- bdev/nbd_common.sh@65 -- # true 00:31:21.693 12:53:41 -- bdev/nbd_common.sh@65 -- # count=0 00:31:21.693 12:53:41 -- bdev/nbd_common.sh@66 -- # echo 0 00:31:21.693 12:53:41 -- bdev/nbd_common.sh@104 -- # count=0 00:31:21.693 12:53:41 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:31:21.693 12:53:41 -- bdev/nbd_common.sh@109 -- # return 0 00:31:21.693 12:53:41 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:31:21.952 12:53:41 -- event/event.sh@35 -- # sleep 3 00:31:22.211 [2024-07-22 12:53:41.500038] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:22.211 [2024-07-22 12:53:41.589727] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:22.211 [2024-07-22 12:53:41.589737] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:22.469 [2024-07-22 12:53:41.646214] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:31:22.469 [2024-07-22 12:53:41.646286] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
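Round 0 of app_repeat above exercises the full nbd data path: two malloc bdevs are created over the /var/tmp/spdk-nbd.sock RPC socket, exported as /dev/nbd0 and /dev/nbd1, filled from a 1 MiB random file with dd oflag=direct, verified with cmp, and then stopped again. A condensed sketch of that write/verify cycle for a single device, using only the RPCs and flags visible in the trace (the temporary file path here is illustrative):

    ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096      # creates Malloc0 (64 MB, 4 KiB blocks)
    ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
    dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256                   # 1 MiB of random test data
    dd if=/tmp/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct         # write it through the nbd device
    cmp -b -n 1M /tmp/nbdrandtest /dev/nbd0                                    # read back and compare
    ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0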
00:31:24.997 12:53:44 -- event/event.sh@23 -- # for i in {0..2} 00:31:24.997 spdk_app_start Round 1 00:31:24.997 12:53:44 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:31:24.997 12:53:44 -- event/event.sh@25 -- # waitforlisten 68575 /var/tmp/spdk-nbd.sock 00:31:24.997 12:53:44 -- common/autotest_common.sh@819 -- # '[' -z 68575 ']' 00:31:24.997 12:53:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:31:24.997 12:53:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:24.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:31:24.997 12:53:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:31:24.997 12:53:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:24.997 12:53:44 -- common/autotest_common.sh@10 -- # set +x 00:31:25.255 12:53:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:25.255 12:53:44 -- common/autotest_common.sh@852 -- # return 0 00:31:25.255 12:53:44 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:31:25.513 Malloc0 00:31:25.513 12:53:44 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:31:25.771 Malloc1 00:31:25.772 12:53:45 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:31:25.772 12:53:45 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:25.772 12:53:45 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:31:25.772 12:53:45 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:31:25.772 12:53:45 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:31:25.772 12:53:45 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:31:25.772 12:53:45 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:31:25.772 12:53:45 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:25.772 12:53:45 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:31:25.772 12:53:45 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:31:25.772 12:53:45 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:31:25.772 12:53:45 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:31:25.772 12:53:45 -- bdev/nbd_common.sh@12 -- # local i 00:31:25.772 12:53:45 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:31:25.772 12:53:45 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:31:25.772 12:53:45 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:31:26.030 /dev/nbd0 00:31:26.030 12:53:45 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:31:26.030 12:53:45 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:31:26.030 12:53:45 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:31:26.030 12:53:45 -- common/autotest_common.sh@857 -- # local i 00:31:26.030 12:53:45 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:31:26.030 12:53:45 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:31:26.030 12:53:45 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:31:26.030 12:53:45 -- common/autotest_common.sh@861 -- # break 00:31:26.030 12:53:45 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:31:26.030 12:53:45 -- common/autotest_common.sh@872 -- # (( i 
<= 20 )) 00:31:26.030 12:53:45 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:31:26.030 1+0 records in 00:31:26.030 1+0 records out 00:31:26.030 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000298083 s, 13.7 MB/s 00:31:26.030 12:53:45 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:31:26.030 12:53:45 -- common/autotest_common.sh@874 -- # size=4096 00:31:26.030 12:53:45 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:31:26.030 12:53:45 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:31:26.030 12:53:45 -- common/autotest_common.sh@877 -- # return 0 00:31:26.031 12:53:45 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:26.031 12:53:45 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:31:26.031 12:53:45 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:31:26.289 /dev/nbd1 00:31:26.289 12:53:45 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:31:26.289 12:53:45 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:31:26.289 12:53:45 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:31:26.289 12:53:45 -- common/autotest_common.sh@857 -- # local i 00:31:26.289 12:53:45 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:31:26.289 12:53:45 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:31:26.289 12:53:45 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:31:26.289 12:53:45 -- common/autotest_common.sh@861 -- # break 00:31:26.289 12:53:45 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:31:26.289 12:53:45 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:31:26.289 12:53:45 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:31:26.289 1+0 records in 00:31:26.289 1+0 records out 00:31:26.289 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00030802 s, 13.3 MB/s 00:31:26.547 12:53:45 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:31:26.547 12:53:45 -- common/autotest_common.sh@874 -- # size=4096 00:31:26.547 12:53:45 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:31:26.547 12:53:45 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:31:26.547 12:53:45 -- common/autotest_common.sh@877 -- # return 0 00:31:26.547 12:53:45 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:26.547 12:53:45 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:31:26.547 12:53:45 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:31:26.547 12:53:45 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:26.547 12:53:45 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:31:26.806 12:53:45 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:31:26.806 { 00:31:26.806 "bdev_name": "Malloc0", 00:31:26.806 "nbd_device": "/dev/nbd0" 00:31:26.806 }, 00:31:26.806 { 00:31:26.806 "bdev_name": "Malloc1", 00:31:26.806 "nbd_device": "/dev/nbd1" 00:31:26.806 } 00:31:26.806 ]' 00:31:26.806 12:53:45 -- bdev/nbd_common.sh@64 -- # echo '[ 00:31:26.806 { 00:31:26.806 "bdev_name": "Malloc0", 00:31:26.806 "nbd_device": "/dev/nbd0" 00:31:26.806 }, 00:31:26.806 { 00:31:26.806 "bdev_name": "Malloc1", 00:31:26.806 "nbd_device": "/dev/nbd1" 00:31:26.806 } 
00:31:26.806 ]' 00:31:26.806 12:53:45 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:31:26.806 12:53:46 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:31:26.806 /dev/nbd1' 00:31:26.806 12:53:46 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:31:26.806 /dev/nbd1' 00:31:26.806 12:53:46 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:31:26.806 12:53:46 -- bdev/nbd_common.sh@65 -- # count=2 00:31:26.806 12:53:46 -- bdev/nbd_common.sh@66 -- # echo 2 00:31:26.806 12:53:46 -- bdev/nbd_common.sh@95 -- # count=2 00:31:26.806 12:53:46 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:31:26.806 12:53:46 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:31:26.806 12:53:46 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:31:26.806 12:53:46 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:31:26.806 12:53:46 -- bdev/nbd_common.sh@71 -- # local operation=write 00:31:26.806 12:53:46 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:31:26.806 12:53:46 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:31:26.806 12:53:46 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:31:26.806 256+0 records in 00:31:26.806 256+0 records out 00:31:26.806 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0119582 s, 87.7 MB/s 00:31:26.806 12:53:46 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:31:26.806 12:53:46 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:31:26.806 256+0 records in 00:31:26.806 256+0 records out 00:31:26.806 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0246735 s, 42.5 MB/s 00:31:26.806 12:53:46 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:31:26.806 12:53:46 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:31:26.806 256+0 records in 00:31:26.806 256+0 records out 00:31:26.806 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0308754 s, 34.0 MB/s 00:31:26.806 12:53:46 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:31:26.806 12:53:46 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:31:26.806 12:53:46 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:31:26.806 12:53:46 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:31:26.806 12:53:46 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:31:26.806 12:53:46 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:31:26.806 12:53:46 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:31:26.806 12:53:46 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:31:26.806 12:53:46 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:31:26.806 12:53:46 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:31:26.806 12:53:46 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:31:26.806 12:53:46 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:31:26.806 12:53:46 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:31:26.806 12:53:46 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:26.806 12:53:46 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 
00:31:26.806 12:53:46 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:26.806 12:53:46 -- bdev/nbd_common.sh@51 -- # local i 00:31:26.806 12:53:46 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:26.806 12:53:46 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:31:27.064 12:53:46 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:31:27.064 12:53:46 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:31:27.064 12:53:46 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:31:27.064 12:53:46 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:27.064 12:53:46 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:27.064 12:53:46 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:31:27.065 12:53:46 -- bdev/nbd_common.sh@41 -- # break 00:31:27.065 12:53:46 -- bdev/nbd_common.sh@45 -- # return 0 00:31:27.065 12:53:46 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:27.065 12:53:46 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:31:27.323 12:53:46 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:31:27.323 12:53:46 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:31:27.323 12:53:46 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:31:27.323 12:53:46 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:27.323 12:53:46 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:27.323 12:53:46 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:31:27.323 12:53:46 -- bdev/nbd_common.sh@41 -- # break 00:31:27.323 12:53:46 -- bdev/nbd_common.sh@45 -- # return 0 00:31:27.323 12:53:46 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:31:27.323 12:53:46 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:27.323 12:53:46 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:31:27.581 12:53:46 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:31:27.582 12:53:46 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:31:27.582 12:53:46 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:31:27.582 12:53:46 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:31:27.582 12:53:46 -- bdev/nbd_common.sh@65 -- # echo '' 00:31:27.582 12:53:46 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:31:27.582 12:53:46 -- bdev/nbd_common.sh@65 -- # true 00:31:27.582 12:53:46 -- bdev/nbd_common.sh@65 -- # count=0 00:31:27.582 12:53:46 -- bdev/nbd_common.sh@66 -- # echo 0 00:31:27.582 12:53:46 -- bdev/nbd_common.sh@104 -- # count=0 00:31:27.582 12:53:46 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:31:27.582 12:53:46 -- bdev/nbd_common.sh@109 -- # return 0 00:31:27.582 12:53:46 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:31:28.149 12:53:47 -- event/event.sh@35 -- # sleep 3 00:31:28.149 [2024-07-22 12:53:47.460573] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:28.149 [2024-07-22 12:53:47.525582] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:28.149 [2024-07-22 12:53:47.525592] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:28.407 [2024-07-22 12:53:47.581347] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 
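In outline, the app_repeat round that just finished boils down to the following command sequence, assembled from the RPC calls and dd/cmp invocations visible in the trace above (the temp-file names are shortened here; sizes and device paths are the ones the test actually used):
    # create two 64 MB malloc bdevs with a 4096-byte block size and export them over NBD
    rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096        # -> Malloc0
    rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096        # -> Malloc1
    rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
    rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
    # write 1 MiB (256 x 4 KiB) of random data to each device, then verify byte-for-byte
    dd if=/dev/urandom of=nbdrandtest bs=4096 count=256
    dd if=nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
    dd if=nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
    cmp -b -n 1M nbdrandtest /dev/nbd0
    cmp -b -n 1M nbdrandtest /dev/nbd1
    # tear down the NBD exports, confirm nbd_get_disks is empty, and stop the app for the next round
    rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
    rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
    rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM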
00:31:28.407 [2024-07-22 12:53:47.581407] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:31:30.938 12:53:50 -- event/event.sh@23 -- # for i in {0..2} 00:31:30.938 spdk_app_start Round 2 00:31:30.938 12:53:50 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:31:30.938 12:53:50 -- event/event.sh@25 -- # waitforlisten 68575 /var/tmp/spdk-nbd.sock 00:31:30.938 12:53:50 -- common/autotest_common.sh@819 -- # '[' -z 68575 ']' 00:31:30.938 12:53:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:31:30.938 12:53:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:30.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:31:30.938 12:53:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:31:30.938 12:53:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:30.938 12:53:50 -- common/autotest_common.sh@10 -- # set +x 00:31:31.197 12:53:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:31.197 12:53:50 -- common/autotest_common.sh@852 -- # return 0 00:31:31.197 12:53:50 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:31:31.455 Malloc0 00:31:31.455 12:53:50 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:31:31.713 Malloc1 00:31:31.713 12:53:51 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:31:31.713 12:53:51 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:31.713 12:53:51 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:31:31.713 12:53:51 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:31:31.713 12:53:51 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:31:31.713 12:53:51 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:31:31.713 12:53:51 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:31:31.713 12:53:51 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:31.713 12:53:51 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:31:31.713 12:53:51 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:31:31.713 12:53:51 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:31:31.713 12:53:51 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:31:31.713 12:53:51 -- bdev/nbd_common.sh@12 -- # local i 00:31:31.713 12:53:51 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:31:31.713 12:53:51 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:31:31.713 12:53:51 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:31:31.972 /dev/nbd0 00:31:31.972 12:53:51 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:31:31.972 12:53:51 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:31:31.972 12:53:51 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:31:31.972 12:53:51 -- common/autotest_common.sh@857 -- # local i 00:31:31.972 12:53:51 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:31:31.972 12:53:51 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:31:31.972 12:53:51 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:31:31.972 12:53:51 -- common/autotest_common.sh@861 
-- # break 00:31:31.972 12:53:51 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:31:31.972 12:53:51 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:31:31.972 12:53:51 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:31:31.972 1+0 records in 00:31:31.972 1+0 records out 00:31:31.972 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000298846 s, 13.7 MB/s 00:31:31.972 12:53:51 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:31:31.972 12:53:51 -- common/autotest_common.sh@874 -- # size=4096 00:31:31.972 12:53:51 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:31:31.972 12:53:51 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:31:31.972 12:53:51 -- common/autotest_common.sh@877 -- # return 0 00:31:31.972 12:53:51 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:31.972 12:53:51 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:31:31.972 12:53:51 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:31:32.230 /dev/nbd1 00:31:32.230 12:53:51 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:31:32.230 12:53:51 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:31:32.230 12:53:51 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:31:32.230 12:53:51 -- common/autotest_common.sh@857 -- # local i 00:31:32.230 12:53:51 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:31:32.230 12:53:51 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:31:32.231 12:53:51 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:31:32.231 12:53:51 -- common/autotest_common.sh@861 -- # break 00:31:32.231 12:53:51 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:31:32.231 12:53:51 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:31:32.231 12:53:51 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:31:32.231 1+0 records in 00:31:32.231 1+0 records out 00:31:32.231 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000323224 s, 12.7 MB/s 00:31:32.231 12:53:51 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:31:32.231 12:53:51 -- common/autotest_common.sh@874 -- # size=4096 00:31:32.231 12:53:51 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:31:32.231 12:53:51 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:31:32.231 12:53:51 -- common/autotest_common.sh@877 -- # return 0 00:31:32.231 12:53:51 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:32.231 12:53:51 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:31:32.231 12:53:51 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:31:32.231 12:53:51 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:32.231 12:53:51 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:31:32.488 12:53:51 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:31:32.488 { 00:31:32.488 "bdev_name": "Malloc0", 00:31:32.488 "nbd_device": "/dev/nbd0" 00:31:32.488 }, 00:31:32.488 { 00:31:32.488 "bdev_name": "Malloc1", 00:31:32.488 "nbd_device": "/dev/nbd1" 00:31:32.488 } 00:31:32.488 ]' 00:31:32.488 12:53:51 -- bdev/nbd_common.sh@64 -- # echo '[ 00:31:32.488 { 00:31:32.488 "bdev_name": "Malloc0", 00:31:32.488 
"nbd_device": "/dev/nbd0" 00:31:32.488 }, 00:31:32.488 { 00:31:32.488 "bdev_name": "Malloc1", 00:31:32.488 "nbd_device": "/dev/nbd1" 00:31:32.488 } 00:31:32.488 ]' 00:31:32.488 12:53:51 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:31:32.745 12:53:51 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:31:32.745 /dev/nbd1' 00:31:32.745 12:53:51 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:31:32.745 /dev/nbd1' 00:31:32.745 12:53:51 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:31:32.745 12:53:51 -- bdev/nbd_common.sh@65 -- # count=2 00:31:32.745 12:53:51 -- bdev/nbd_common.sh@66 -- # echo 2 00:31:32.745 12:53:51 -- bdev/nbd_common.sh@95 -- # count=2 00:31:32.745 12:53:51 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:31:32.745 12:53:51 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:31:32.745 12:53:51 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:31:32.746 12:53:51 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:31:32.746 12:53:51 -- bdev/nbd_common.sh@71 -- # local operation=write 00:31:32.746 12:53:51 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:31:32.746 12:53:51 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:31:32.746 12:53:51 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:31:32.746 256+0 records in 00:31:32.746 256+0 records out 00:31:32.746 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00952615 s, 110 MB/s 00:31:32.746 12:53:51 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:31:32.746 12:53:51 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:31:32.746 256+0 records in 00:31:32.746 256+0 records out 00:31:32.746 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0250373 s, 41.9 MB/s 00:31:32.746 12:53:51 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:31:32.746 12:53:51 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:31:32.746 256+0 records in 00:31:32.746 256+0 records out 00:31:32.746 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.026528 s, 39.5 MB/s 00:31:32.746 12:53:52 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:31:32.746 12:53:52 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:31:32.746 12:53:52 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:31:32.746 12:53:52 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:31:32.746 12:53:52 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:31:32.746 12:53:52 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:31:32.746 12:53:52 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:31:32.746 12:53:52 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:31:32.746 12:53:52 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:31:32.746 12:53:52 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:31:32.746 12:53:52 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:31:32.746 12:53:52 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:31:32.746 12:53:52 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:31:32.746 12:53:52 -- 
bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:32.746 12:53:52 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:31:32.746 12:53:52 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:32.746 12:53:52 -- bdev/nbd_common.sh@51 -- # local i 00:31:32.746 12:53:52 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:32.746 12:53:52 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:31:33.003 12:53:52 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:31:33.003 12:53:52 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:31:33.003 12:53:52 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:31:33.003 12:53:52 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:33.003 12:53:52 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:33.003 12:53:52 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:31:33.003 12:53:52 -- bdev/nbd_common.sh@41 -- # break 00:31:33.003 12:53:52 -- bdev/nbd_common.sh@45 -- # return 0 00:31:33.003 12:53:52 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:33.003 12:53:52 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:31:33.261 12:53:52 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:31:33.261 12:53:52 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:31:33.261 12:53:52 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:31:33.261 12:53:52 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:33.261 12:53:52 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:33.261 12:53:52 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:31:33.261 12:53:52 -- bdev/nbd_common.sh@41 -- # break 00:31:33.261 12:53:52 -- bdev/nbd_common.sh@45 -- # return 0 00:31:33.261 12:53:52 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:31:33.261 12:53:52 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:33.261 12:53:52 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:31:33.519 12:53:52 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:31:33.519 12:53:52 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:31:33.519 12:53:52 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:31:33.519 12:53:52 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:31:33.519 12:53:52 -- bdev/nbd_common.sh@65 -- # echo '' 00:31:33.519 12:53:52 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:31:33.519 12:53:52 -- bdev/nbd_common.sh@65 -- # true 00:31:33.519 12:53:52 -- bdev/nbd_common.sh@65 -- # count=0 00:31:33.519 12:53:52 -- bdev/nbd_common.sh@66 -- # echo 0 00:31:33.519 12:53:52 -- bdev/nbd_common.sh@104 -- # count=0 00:31:33.519 12:53:52 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:31:33.519 12:53:52 -- bdev/nbd_common.sh@109 -- # return 0 00:31:33.519 12:53:52 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:31:33.776 12:53:53 -- event/event.sh@35 -- # sleep 3 00:31:34.034 [2024-07-22 12:53:53.351181] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:34.034 [2024-07-22 12:53:53.416601] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:34.034 [2024-07-22 12:53:53.416612] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:34.292 [2024-07-22 12:53:53.471593] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 
'bdev_register' already registered. 00:31:34.292 [2024-07-22 12:53:53.471651] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:31:36.823 12:53:56 -- event/event.sh@38 -- # waitforlisten 68575 /var/tmp/spdk-nbd.sock 00:31:36.823 12:53:56 -- common/autotest_common.sh@819 -- # '[' -z 68575 ']' 00:31:36.823 12:53:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:31:36.823 12:53:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:36.823 12:53:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:31:36.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:31:36.823 12:53:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:36.823 12:53:56 -- common/autotest_common.sh@10 -- # set +x 00:31:37.082 12:53:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:37.082 12:53:56 -- common/autotest_common.sh@852 -- # return 0 00:31:37.082 12:53:56 -- event/event.sh@39 -- # killprocess 68575 00:31:37.082 12:53:56 -- common/autotest_common.sh@926 -- # '[' -z 68575 ']' 00:31:37.082 12:53:56 -- common/autotest_common.sh@930 -- # kill -0 68575 00:31:37.082 12:53:56 -- common/autotest_common.sh@931 -- # uname 00:31:37.082 12:53:56 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:37.082 12:53:56 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 68575 00:31:37.082 12:53:56 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:31:37.082 12:53:56 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:31:37.082 killing process with pid 68575 00:31:37.082 12:53:56 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 68575' 00:31:37.082 12:53:56 -- common/autotest_common.sh@945 -- # kill 68575 00:31:37.082 12:53:56 -- common/autotest_common.sh@950 -- # wait 68575 00:31:37.340 spdk_app_start is called in Round 0. 00:31:37.340 Shutdown signal received, stop current app iteration 00:31:37.340 Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 reinitialization... 00:31:37.340 spdk_app_start is called in Round 1. 00:31:37.340 Shutdown signal received, stop current app iteration 00:31:37.340 Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 reinitialization... 00:31:37.340 spdk_app_start is called in Round 2. 00:31:37.340 Shutdown signal received, stop current app iteration 00:31:37.340 Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 reinitialization... 00:31:37.340 spdk_app_start is called in Round 3. 
00:31:37.340 Shutdown signal received, stop current app iteration 00:31:37.340 12:53:56 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:31:37.340 12:53:56 -- event/event.sh@42 -- # return 0 00:31:37.340 00:31:37.340 real 0m19.202s 00:31:37.340 user 0m43.184s 00:31:37.340 sys 0m3.086s 00:31:37.340 12:53:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:37.340 12:53:56 -- common/autotest_common.sh@10 -- # set +x 00:31:37.340 ************************************ 00:31:37.340 END TEST app_repeat 00:31:37.340 ************************************ 00:31:37.340 12:53:56 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:31:37.340 12:53:56 -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:31:37.340 12:53:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:31:37.340 12:53:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:37.340 12:53:56 -- common/autotest_common.sh@10 -- # set +x 00:31:37.340 ************************************ 00:31:37.340 START TEST cpu_locks 00:31:37.340 ************************************ 00:31:37.340 12:53:56 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:31:37.598 * Looking for test storage... 00:31:37.598 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:31:37.598 12:53:56 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:31:37.598 12:53:56 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:31:37.598 12:53:56 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:31:37.598 12:53:56 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:31:37.598 12:53:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:31:37.598 12:53:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:37.598 12:53:56 -- common/autotest_common.sh@10 -- # set +x 00:31:37.598 ************************************ 00:31:37.598 START TEST default_locks 00:31:37.598 ************************************ 00:31:37.598 12:53:56 -- common/autotest_common.sh@1104 -- # default_locks 00:31:37.598 12:53:56 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=69205 00:31:37.598 12:53:56 -- event/cpu_locks.sh@47 -- # waitforlisten 69205 00:31:37.598 12:53:56 -- common/autotest_common.sh@819 -- # '[' -z 69205 ']' 00:31:37.598 12:53:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:37.598 12:53:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:37.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:37.598 12:53:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:37.598 12:53:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:37.598 12:53:56 -- common/autotest_common.sh@10 -- # set +x 00:31:37.598 12:53:56 -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:31:37.598 [2024-07-22 12:53:56.853621] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:31:37.598 [2024-07-22 12:53:56.853727] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69205 ] 00:31:37.598 [2024-07-22 12:53:56.985312] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:37.857 [2024-07-22 12:53:57.071816] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:37.857 [2024-07-22 12:53:57.071997] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:38.790 12:53:57 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:38.790 12:53:57 -- common/autotest_common.sh@852 -- # return 0 00:31:38.790 12:53:57 -- event/cpu_locks.sh@49 -- # locks_exist 69205 00:31:38.790 12:53:57 -- event/cpu_locks.sh@22 -- # lslocks -p 69205 00:31:38.790 12:53:57 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:31:39.048 12:53:58 -- event/cpu_locks.sh@50 -- # killprocess 69205 00:31:39.048 12:53:58 -- common/autotest_common.sh@926 -- # '[' -z 69205 ']' 00:31:39.048 12:53:58 -- common/autotest_common.sh@930 -- # kill -0 69205 00:31:39.048 12:53:58 -- common/autotest_common.sh@931 -- # uname 00:31:39.048 12:53:58 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:39.048 12:53:58 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 69205 00:31:39.048 12:53:58 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:31:39.048 12:53:58 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:31:39.048 killing process with pid 69205 00:31:39.048 12:53:58 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 69205' 00:31:39.048 12:53:58 -- common/autotest_common.sh@945 -- # kill 69205 00:31:39.048 12:53:58 -- common/autotest_common.sh@950 -- # wait 69205 00:31:39.615 12:53:58 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 69205 00:31:39.615 12:53:58 -- common/autotest_common.sh@640 -- # local es=0 00:31:39.615 12:53:58 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 69205 00:31:39.615 12:53:58 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:31:39.615 12:53:58 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:31:39.615 12:53:58 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:31:39.615 12:53:58 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:31:39.615 12:53:58 -- common/autotest_common.sh@643 -- # waitforlisten 69205 00:31:39.615 12:53:58 -- common/autotest_common.sh@819 -- # '[' -z 69205 ']' 00:31:39.615 12:53:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:39.615 12:53:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:39.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:39.615 12:53:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
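The lock probe used throughout these cpu_locks tests is the lslocks pipeline shown above; roughly, with the target pid as an assumed shell variable:
    pid=69205                                   # pid of the spdk_tgt started with -m 0x1 in this run
    lslocks -p "$pid" | grep -q spdk_cpu_lock   # exits 0 while the target holds its core-0 lock file
    kill "$pid" && wait "$pid"                  # after killprocess, waitforlisten on the same pid is expected to fail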
00:31:39.615 12:53:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:39.615 12:53:58 -- common/autotest_common.sh@10 -- # set +x 00:31:39.615 ERROR: process (pid: 69205) is no longer running 00:31:39.615 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (69205) - No such process 00:31:39.615 12:53:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:39.615 12:53:58 -- common/autotest_common.sh@852 -- # return 1 00:31:39.615 12:53:58 -- common/autotest_common.sh@643 -- # es=1 00:31:39.615 12:53:58 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:31:39.615 12:53:58 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:31:39.615 12:53:58 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:31:39.615 12:53:58 -- event/cpu_locks.sh@54 -- # no_locks 00:31:39.615 12:53:58 -- event/cpu_locks.sh@26 -- # lock_files=() 00:31:39.615 12:53:58 -- event/cpu_locks.sh@26 -- # local lock_files 00:31:39.615 12:53:58 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:31:39.615 00:31:39.615 real 0m1.935s 00:31:39.615 user 0m2.095s 00:31:39.615 sys 0m0.571s 00:31:39.615 12:53:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:39.615 ************************************ 00:31:39.615 END TEST default_locks 00:31:39.615 ************************************ 00:31:39.615 12:53:58 -- common/autotest_common.sh@10 -- # set +x 00:31:39.615 12:53:58 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:31:39.615 12:53:58 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:31:39.615 12:53:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:39.615 12:53:58 -- common/autotest_common.sh@10 -- # set +x 00:31:39.615 ************************************ 00:31:39.615 START TEST default_locks_via_rpc 00:31:39.615 ************************************ 00:31:39.615 12:53:58 -- common/autotest_common.sh@1104 -- # default_locks_via_rpc 00:31:39.615 12:53:58 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=69268 00:31:39.615 12:53:58 -- event/cpu_locks.sh@63 -- # waitforlisten 69268 00:31:39.615 12:53:58 -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:31:39.615 12:53:58 -- common/autotest_common.sh@819 -- # '[' -z 69268 ']' 00:31:39.615 12:53:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:39.615 12:53:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:39.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:39.615 12:53:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:39.615 12:53:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:39.615 12:53:58 -- common/autotest_common.sh@10 -- # set +x 00:31:39.615 [2024-07-22 12:53:58.847226] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:31:39.615 [2024-07-22 12:53:58.847340] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69268 ] 00:31:39.615 [2024-07-22 12:53:58.987910] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:39.874 [2024-07-22 12:53:59.077723] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:39.874 [2024-07-22 12:53:59.077889] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:40.441 12:53:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:40.441 12:53:59 -- common/autotest_common.sh@852 -- # return 0 00:31:40.441 12:53:59 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:31:40.441 12:53:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:40.441 12:53:59 -- common/autotest_common.sh@10 -- # set +x 00:31:40.441 12:53:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:40.441 12:53:59 -- event/cpu_locks.sh@67 -- # no_locks 00:31:40.441 12:53:59 -- event/cpu_locks.sh@26 -- # lock_files=() 00:31:40.441 12:53:59 -- event/cpu_locks.sh@26 -- # local lock_files 00:31:40.441 12:53:59 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:31:40.441 12:53:59 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:31:40.441 12:53:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:40.441 12:53:59 -- common/autotest_common.sh@10 -- # set +x 00:31:40.441 12:53:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:40.441 12:53:59 -- event/cpu_locks.sh@71 -- # locks_exist 69268 00:31:40.441 12:53:59 -- event/cpu_locks.sh@22 -- # lslocks -p 69268 00:31:40.441 12:53:59 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:31:41.007 12:54:00 -- event/cpu_locks.sh@73 -- # killprocess 69268 00:31:41.007 12:54:00 -- common/autotest_common.sh@926 -- # '[' -z 69268 ']' 00:31:41.007 12:54:00 -- common/autotest_common.sh@930 -- # kill -0 69268 00:31:41.007 12:54:00 -- common/autotest_common.sh@931 -- # uname 00:31:41.007 12:54:00 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:41.007 12:54:00 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 69268 00:31:41.007 12:54:00 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:31:41.007 killing process with pid 69268 00:31:41.007 12:54:00 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:31:41.007 12:54:00 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 69268' 00:31:41.007 12:54:00 -- common/autotest_common.sh@945 -- # kill 69268 00:31:41.007 12:54:00 -- common/autotest_common.sh@950 -- # wait 69268 00:31:41.574 00:31:41.574 real 0m1.921s 00:31:41.574 user 0m2.055s 00:31:41.574 sys 0m0.562s 00:31:41.574 12:54:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:41.574 12:54:00 -- common/autotest_common.sh@10 -- # set +x 00:31:41.574 ************************************ 00:31:41.574 END TEST default_locks_via_rpc 00:31:41.574 ************************************ 00:31:41.574 12:54:00 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:31:41.574 12:54:00 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:31:41.574 12:54:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:41.574 12:54:00 -- common/autotest_common.sh@10 -- # set +x 00:31:41.574 
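default_locks_via_rpc, which finished just above, exercises the same lock check through the running target's JSON-RPC interface instead of a restart; a condensed sketch of the calls it issues (rpc_cmd stands for scripts/rpc.py pointed at the target's socket, and $pid is the target's pid):
    rpc_cmd framework_disable_cpumask_locks     # drop the per-core lock while the app keeps running
    # no_locks: assert that no core lock files remain for this instance
    rpc_cmd framework_enable_cpumask_locks      # take the lock again at runtime
    lslocks -p "$pid" | grep -q spdk_cpu_lock   # locks_exist: the lock shows up for the pid once more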
************************************ 00:31:41.574 START TEST non_locking_app_on_locked_coremask 00:31:41.574 ************************************ 00:31:41.574 12:54:00 -- common/autotest_common.sh@1104 -- # non_locking_app_on_locked_coremask 00:31:41.574 12:54:00 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=69333 00:31:41.574 12:54:00 -- event/cpu_locks.sh@81 -- # waitforlisten 69333 /var/tmp/spdk.sock 00:31:41.574 12:54:00 -- common/autotest_common.sh@819 -- # '[' -z 69333 ']' 00:31:41.574 12:54:00 -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:31:41.574 12:54:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:41.574 12:54:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:41.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:41.574 12:54:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:41.574 12:54:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:41.574 12:54:00 -- common/autotest_common.sh@10 -- # set +x 00:31:41.574 [2024-07-22 12:54:00.819794] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:31:41.574 [2024-07-22 12:54:00.819905] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69333 ] 00:31:41.574 [2024-07-22 12:54:00.959734] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:41.833 [2024-07-22 12:54:01.047617] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:41.833 [2024-07-22 12:54:01.047770] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:42.767 12:54:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:42.767 12:54:01 -- common/autotest_common.sh@852 -- # return 0 00:31:42.767 12:54:01 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=69361 00:31:42.767 12:54:01 -- event/cpu_locks.sh@85 -- # waitforlisten 69361 /var/tmp/spdk2.sock 00:31:42.767 12:54:01 -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:31:42.767 12:54:01 -- common/autotest_common.sh@819 -- # '[' -z 69361 ']' 00:31:42.767 12:54:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:31:42.767 12:54:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:42.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:31:42.767 12:54:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:31:42.767 12:54:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:42.767 12:54:01 -- common/autotest_common.sh@10 -- # set +x 00:31:42.767 [2024-07-22 12:54:01.914680] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:31:42.767 [2024-07-22 12:54:01.914766] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69361 ] 00:31:42.767 [2024-07-22 12:54:02.058620] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:31:42.767 [2024-07-22 12:54:02.058669] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:43.027 [2024-07-22 12:54:02.251040] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:43.027 [2024-07-22 12:54:02.254347] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:43.594 12:54:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:43.594 12:54:02 -- common/autotest_common.sh@852 -- # return 0 00:31:43.594 12:54:02 -- event/cpu_locks.sh@87 -- # locks_exist 69333 00:31:43.594 12:54:02 -- event/cpu_locks.sh@22 -- # lslocks -p 69333 00:31:43.594 12:54:02 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:31:44.529 12:54:03 -- event/cpu_locks.sh@89 -- # killprocess 69333 00:31:44.529 12:54:03 -- common/autotest_common.sh@926 -- # '[' -z 69333 ']' 00:31:44.529 12:54:03 -- common/autotest_common.sh@930 -- # kill -0 69333 00:31:44.529 12:54:03 -- common/autotest_common.sh@931 -- # uname 00:31:44.529 12:54:03 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:44.529 12:54:03 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 69333 00:31:44.529 12:54:03 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:31:44.529 12:54:03 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:31:44.529 killing process with pid 69333 00:31:44.529 12:54:03 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 69333' 00:31:44.529 12:54:03 -- common/autotest_common.sh@945 -- # kill 69333 00:31:44.529 12:54:03 -- common/autotest_common.sh@950 -- # wait 69333 00:31:45.464 12:54:04 -- event/cpu_locks.sh@90 -- # killprocess 69361 00:31:45.464 12:54:04 -- common/autotest_common.sh@926 -- # '[' -z 69361 ']' 00:31:45.464 12:54:04 -- common/autotest_common.sh@930 -- # kill -0 69361 00:31:45.464 12:54:04 -- common/autotest_common.sh@931 -- # uname 00:31:45.464 12:54:04 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:45.464 12:54:04 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 69361 00:31:45.464 12:54:04 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:31:45.464 12:54:04 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:31:45.464 killing process with pid 69361 00:31:45.464 12:54:04 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 69361' 00:31:45.464 12:54:04 -- common/autotest_common.sh@945 -- # kill 69361 00:31:45.464 12:54:04 -- common/autotest_common.sh@950 -- # wait 69361 00:31:45.723 00:31:45.723 real 0m4.279s 00:31:45.723 user 0m4.805s 00:31:45.723 sys 0m1.197s 00:31:45.723 12:54:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:45.723 ************************************ 00:31:45.723 END TEST non_locking_app_on_locked_coremask 00:31:45.723 ************************************ 00:31:45.723 12:54:05 -- common/autotest_common.sh@10 -- # set +x 00:31:45.723 12:54:05 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:31:45.723 12:54:05 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:31:45.723 12:54:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:45.723 12:54:05 -- common/autotest_common.sh@10 -- # set +x 00:31:45.723 ************************************ 00:31:45.723 START TEST locking_app_on_unlocked_coremask 00:31:45.723 ************************************ 00:31:45.723 12:54:05 -- common/autotest_common.sh@1104 -- # locking_app_on_unlocked_coremask 00:31:45.723 12:54:05 -- 
event/cpu_locks.sh@98 -- # spdk_tgt_pid=69445 00:31:45.723 12:54:05 -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:31:45.723 12:54:05 -- event/cpu_locks.sh@99 -- # waitforlisten 69445 /var/tmp/spdk.sock 00:31:45.723 12:54:05 -- common/autotest_common.sh@819 -- # '[' -z 69445 ']' 00:31:45.723 12:54:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:45.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:45.723 12:54:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:45.723 12:54:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:45.723 12:54:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:45.723 12:54:05 -- common/autotest_common.sh@10 -- # set +x 00:31:45.981 [2024-07-22 12:54:05.152637] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:31:45.982 [2024-07-22 12:54:05.152727] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69445 ] 00:31:45.982 [2024-07-22 12:54:05.289492] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:31:45.982 [2024-07-22 12:54:05.289536] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:45.982 [2024-07-22 12:54:05.384020] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:45.982 [2024-07-22 12:54:05.384220] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:46.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:31:46.917 12:54:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:46.917 12:54:06 -- common/autotest_common.sh@852 -- # return 0 00:31:46.917 12:54:06 -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:31:46.917 12:54:06 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=69473 00:31:46.917 12:54:06 -- event/cpu_locks.sh@103 -- # waitforlisten 69473 /var/tmp/spdk2.sock 00:31:46.917 12:54:06 -- common/autotest_common.sh@819 -- # '[' -z 69473 ']' 00:31:46.917 12:54:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:31:46.917 12:54:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:46.917 12:54:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:31:46.917 12:54:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:46.917 12:54:06 -- common/autotest_common.sh@10 -- # set +x 00:31:46.917 [2024-07-22 12:54:06.189244] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:31:46.917 [2024-07-22 12:54:06.189490] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69473 ] 00:31:46.917 [2024-07-22 12:54:06.328257] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:47.175 [2024-07-22 12:54:06.503252] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:47.175 [2024-07-22 12:54:06.503438] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:47.851 12:54:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:47.851 12:54:07 -- common/autotest_common.sh@852 -- # return 0 00:31:47.851 12:54:07 -- event/cpu_locks.sh@105 -- # locks_exist 69473 00:31:47.851 12:54:07 -- event/cpu_locks.sh@22 -- # lslocks -p 69473 00:31:47.851 12:54:07 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:31:48.822 12:54:07 -- event/cpu_locks.sh@107 -- # killprocess 69445 00:31:48.822 12:54:07 -- common/autotest_common.sh@926 -- # '[' -z 69445 ']' 00:31:48.822 12:54:07 -- common/autotest_common.sh@930 -- # kill -0 69445 00:31:48.822 12:54:08 -- common/autotest_common.sh@931 -- # uname 00:31:48.822 12:54:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:48.822 12:54:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 69445 00:31:48.822 killing process with pid 69445 00:31:48.822 12:54:08 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:31:48.822 12:54:08 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:31:48.822 12:54:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 69445' 00:31:48.822 12:54:08 -- common/autotest_common.sh@945 -- # kill 69445 00:31:48.822 12:54:08 -- common/autotest_common.sh@950 -- # wait 69445 00:31:49.391 12:54:08 -- event/cpu_locks.sh@108 -- # killprocess 69473 00:31:49.391 12:54:08 -- common/autotest_common.sh@926 -- # '[' -z 69473 ']' 00:31:49.391 12:54:08 -- common/autotest_common.sh@930 -- # kill -0 69473 00:31:49.391 12:54:08 -- common/autotest_common.sh@931 -- # uname 00:31:49.649 12:54:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:49.649 12:54:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 69473 00:31:49.649 killing process with pid 69473 00:31:49.649 12:54:08 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:31:49.649 12:54:08 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:31:49.649 12:54:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 69473' 00:31:49.649 12:54:08 -- common/autotest_common.sh@945 -- # kill 69473 00:31:49.649 12:54:08 -- common/autotest_common.sh@950 -- # wait 69473 00:31:49.907 ************************************ 00:31:49.908 END TEST locking_app_on_unlocked_coremask 00:31:49.908 ************************************ 00:31:49.908 00:31:49.908 real 0m4.118s 00:31:49.908 user 0m4.549s 00:31:49.908 sys 0m1.167s 00:31:49.908 12:54:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:49.908 12:54:09 -- common/autotest_common.sh@10 -- # set +x 00:31:49.908 12:54:09 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:31:49.908 12:54:09 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:31:49.908 12:54:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:49.908 12:54:09 -- common/autotest_common.sh@10 -- # set +x 
00:31:49.908 ************************************ 00:31:49.908 START TEST locking_app_on_locked_coremask 00:31:49.908 ************************************ 00:31:49.908 12:54:09 -- common/autotest_common.sh@1104 -- # locking_app_on_locked_coremask 00:31:49.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:49.908 12:54:09 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=69558 00:31:49.908 12:54:09 -- event/cpu_locks.sh@116 -- # waitforlisten 69558 /var/tmp/spdk.sock 00:31:49.908 12:54:09 -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:31:49.908 12:54:09 -- common/autotest_common.sh@819 -- # '[' -z 69558 ']' 00:31:49.908 12:54:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:49.908 12:54:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:49.908 12:54:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:49.908 12:54:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:49.908 12:54:09 -- common/autotest_common.sh@10 -- # set +x 00:31:49.908 [2024-07-22 12:54:09.323229] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:31:49.908 [2024-07-22 12:54:09.323521] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69558 ] 00:31:50.166 [2024-07-22 12:54:09.463235] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:50.166 [2024-07-22 12:54:09.542085] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:50.166 [2024-07-22 12:54:09.542587] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:51.100 12:54:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:51.100 12:54:10 -- common/autotest_common.sh@852 -- # return 0 00:31:51.100 12:54:10 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=69585 00:31:51.100 12:54:10 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 69585 /var/tmp/spdk2.sock 00:31:51.100 12:54:10 -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:31:51.100 12:54:10 -- common/autotest_common.sh@640 -- # local es=0 00:31:51.101 12:54:10 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 69585 /var/tmp/spdk2.sock 00:31:51.101 12:54:10 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:31:51.101 12:54:10 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:31:51.101 12:54:10 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:31:51.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:31:51.101 12:54:10 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:31:51.101 12:54:10 -- common/autotest_common.sh@643 -- # waitforlisten 69585 /var/tmp/spdk2.sock 00:31:51.101 12:54:10 -- common/autotest_common.sh@819 -- # '[' -z 69585 ']' 00:31:51.101 12:54:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:31:51.101 12:54:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:51.101 12:54:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
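locking_app_on_locked_coremask is the negative counterpart: the second target is started on the same core without --disable-cpumask-locks, so its startup is expected to fail with the lock-claim errors reported just below, and NOT waitforlisten asserts that failure. Sketched with the pids from this run:
    spdk_tgt -m 0x1 &                        # pid 69558 here; takes the core-0 lock
    spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock   # pid 69585 here; cannot claim core 0 and exits
    # waitforlisten on the second pid therefore returns non-zero, which is what the test expects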
00:31:51.101 12:54:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:51.101 12:54:10 -- common/autotest_common.sh@10 -- # set +x 00:31:51.101 [2024-07-22 12:54:10.340950] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:31:51.101 [2024-07-22 12:54:10.341042] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69585 ] 00:31:51.101 [2024-07-22 12:54:10.482131] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 69558 has claimed it. 00:31:51.101 [2024-07-22 12:54:10.486232] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:31:51.667 ERROR: process (pid: 69585) is no longer running 00:31:51.667 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (69585) - No such process 00:31:51.667 12:54:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:51.667 12:54:11 -- common/autotest_common.sh@852 -- # return 1 00:31:51.667 12:54:11 -- common/autotest_common.sh@643 -- # es=1 00:31:51.667 12:54:11 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:31:51.667 12:54:11 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:31:51.667 12:54:11 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:31:51.667 12:54:11 -- event/cpu_locks.sh@122 -- # locks_exist 69558 00:31:51.667 12:54:11 -- event/cpu_locks.sh@22 -- # lslocks -p 69558 00:31:51.667 12:54:11 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:31:52.234 12:54:11 -- event/cpu_locks.sh@124 -- # killprocess 69558 00:31:52.234 12:54:11 -- common/autotest_common.sh@926 -- # '[' -z 69558 ']' 00:31:52.234 12:54:11 -- common/autotest_common.sh@930 -- # kill -0 69558 00:31:52.234 12:54:11 -- common/autotest_common.sh@931 -- # uname 00:31:52.234 12:54:11 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:52.234 12:54:11 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 69558 00:31:52.234 killing process with pid 69558 00:31:52.234 12:54:11 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:31:52.234 12:54:11 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:31:52.234 12:54:11 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 69558' 00:31:52.234 12:54:11 -- common/autotest_common.sh@945 -- # kill 69558 00:31:52.234 12:54:11 -- common/autotest_common.sh@950 -- # wait 69558 00:31:52.493 ************************************ 00:31:52.493 END TEST locking_app_on_locked_coremask 00:31:52.493 ************************************ 00:31:52.493 00:31:52.493 real 0m2.619s 00:31:52.493 user 0m2.979s 00:31:52.493 sys 0m0.629s 00:31:52.493 12:54:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:52.493 12:54:11 -- common/autotest_common.sh@10 -- # set +x 00:31:52.752 12:54:11 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:31:52.752 12:54:11 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:31:52.752 12:54:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:52.752 12:54:11 -- common/autotest_common.sh@10 -- # set +x 00:31:52.752 ************************************ 00:31:52.752 START TEST locking_overlapped_coremask 00:31:52.752 ************************************ 00:31:52.752 12:54:11 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask 00:31:52.752 12:54:11 
-- event/cpu_locks.sh@132 -- # spdk_tgt_pid=69632 00:31:52.752 12:54:11 -- event/cpu_locks.sh@133 -- # waitforlisten 69632 /var/tmp/spdk.sock 00:31:52.752 12:54:11 -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:31:52.752 12:54:11 -- common/autotest_common.sh@819 -- # '[' -z 69632 ']' 00:31:52.752 12:54:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:52.752 12:54:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:52.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:52.752 12:54:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:52.752 12:54:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:52.752 12:54:11 -- common/autotest_common.sh@10 -- # set +x 00:31:52.752 [2024-07-22 12:54:11.999475] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:31:52.752 [2024-07-22 12:54:11.999567] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69632 ] 00:31:52.752 [2024-07-22 12:54:12.136638] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:53.011 [2024-07-22 12:54:12.224127] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:53.011 [2024-07-22 12:54:12.224375] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:53.011 [2024-07-22 12:54:12.225120] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:31:53.011 [2024-07-22 12:54:12.225192] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:53.946 12:54:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:53.946 12:54:13 -- common/autotest_common.sh@852 -- # return 0 00:31:53.946 12:54:13 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=69662 00:31:53.946 12:54:13 -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:31:53.946 12:54:13 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 69662 /var/tmp/spdk2.sock 00:31:53.946 12:54:13 -- common/autotest_common.sh@640 -- # local es=0 00:31:53.946 12:54:13 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 69662 /var/tmp/spdk2.sock 00:31:53.946 12:54:13 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:31:53.946 12:54:13 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:31:53.946 12:54:13 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:31:53.946 12:54:13 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:31:53.946 12:54:13 -- common/autotest_common.sh@643 -- # waitforlisten 69662 /var/tmp/spdk2.sock 00:31:53.946 12:54:13 -- common/autotest_common.sh@819 -- # '[' -z 69662 ']' 00:31:53.946 12:54:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:31:53.946 12:54:13 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:53.946 12:54:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:31:53.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:31:53.946 12:54:13 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:53.946 12:54:13 -- common/autotest_common.sh@10 -- # set +x 00:31:53.946 [2024-07-22 12:54:13.102986] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:31:53.946 [2024-07-22 12:54:13.103147] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69662 ] 00:31:53.946 [2024-07-22 12:54:13.253779] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 69632 has claimed it. 00:31:53.946 [2024-07-22 12:54:13.254258] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:31:54.513 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (69662) - No such process 00:31:54.513 ERROR: process (pid: 69662) is no longer running 00:31:54.513 12:54:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:54.513 12:54:13 -- common/autotest_common.sh@852 -- # return 1 00:31:54.513 12:54:13 -- common/autotest_common.sh@643 -- # es=1 00:31:54.513 12:54:13 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:31:54.513 12:54:13 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:31:54.513 12:54:13 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:31:54.513 12:54:13 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:31:54.513 12:54:13 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:31:54.513 12:54:13 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:31:54.514 12:54:13 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:31:54.514 12:54:13 -- event/cpu_locks.sh@141 -- # killprocess 69632 00:31:54.514 12:54:13 -- common/autotest_common.sh@926 -- # '[' -z 69632 ']' 00:31:54.514 12:54:13 -- common/autotest_common.sh@930 -- # kill -0 69632 00:31:54.514 12:54:13 -- common/autotest_common.sh@931 -- # uname 00:31:54.514 12:54:13 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:54.514 12:54:13 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 69632 00:31:54.514 killing process with pid 69632 00:31:54.514 12:54:13 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:31:54.514 12:54:13 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:31:54.514 12:54:13 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 69632' 00:31:54.514 12:54:13 -- common/autotest_common.sh@945 -- # kill 69632 00:31:54.514 12:54:13 -- common/autotest_common.sh@950 -- # wait 69632 00:31:55.080 00:31:55.080 real 0m2.277s 00:31:55.080 user 0m6.375s 00:31:55.080 sys 0m0.513s 00:31:55.080 12:54:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:55.080 ************************************ 00:31:55.080 END TEST locking_overlapped_coremask 00:31:55.080 ************************************ 00:31:55.080 12:54:14 -- common/autotest_common.sh@10 -- # set +x 00:31:55.080 12:54:14 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:31:55.080 12:54:14 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:31:55.080 12:54:14 -- 
common/autotest_common.sh@1083 -- # xtrace_disable 00:31:55.080 12:54:14 -- common/autotest_common.sh@10 -- # set +x 00:31:55.080 ************************************ 00:31:55.080 START TEST locking_overlapped_coremask_via_rpc 00:31:55.080 ************************************ 00:31:55.080 12:54:14 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask_via_rpc 00:31:55.080 12:54:14 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=69708 00:31:55.080 12:54:14 -- event/cpu_locks.sh@149 -- # waitforlisten 69708 /var/tmp/spdk.sock 00:31:55.080 12:54:14 -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:31:55.080 12:54:14 -- common/autotest_common.sh@819 -- # '[' -z 69708 ']' 00:31:55.080 12:54:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:55.080 12:54:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:55.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:55.080 12:54:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:55.080 12:54:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:55.080 12:54:14 -- common/autotest_common.sh@10 -- # set +x 00:31:55.080 [2024-07-22 12:54:14.338453] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:31:55.081 [2024-07-22 12:54:14.338596] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69708 ] 00:31:55.081 [2024-07-22 12:54:14.476054] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:31:55.081 [2024-07-22 12:54:14.476104] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:55.339 [2024-07-22 12:54:14.573193] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:55.339 [2024-07-22 12:54:14.573493] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:55.339 [2024-07-22 12:54:14.573722] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:31:55.339 [2024-07-22 12:54:14.573727] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:55.905 12:54:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:55.905 12:54:15 -- common/autotest_common.sh@852 -- # return 0 00:31:55.905 12:54:15 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=69738 00:31:55.905 12:54:15 -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:31:55.905 12:54:15 -- event/cpu_locks.sh@153 -- # waitforlisten 69738 /var/tmp/spdk2.sock 00:31:55.905 12:54:15 -- common/autotest_common.sh@819 -- # '[' -z 69738 ']' 00:31:55.905 12:54:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:31:55.905 12:54:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:55.905 12:54:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:31:55.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:31:55.905 12:54:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:55.905 12:54:15 -- common/autotest_common.sh@10 -- # set +x 00:31:56.164 [2024-07-22 12:54:15.370927] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:31:56.164 [2024-07-22 12:54:15.371023] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69738 ] 00:31:56.164 [2024-07-22 12:54:15.515777] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:31:56.164 [2024-07-22 12:54:15.519161] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:56.422 [2024-07-22 12:54:15.690484] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:56.422 [2024-07-22 12:54:15.691055] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:31:56.422 [2024-07-22 12:54:15.691248] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:31:56.422 [2024-07-22 12:54:15.691355] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:31:56.988 12:54:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:56.988 12:54:16 -- common/autotest_common.sh@852 -- # return 0 00:31:56.988 12:54:16 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:31:56.988 12:54:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:56.988 12:54:16 -- common/autotest_common.sh@10 -- # set +x 00:31:57.246 12:54:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:57.246 12:54:16 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:31:57.246 12:54:16 -- common/autotest_common.sh@640 -- # local es=0 00:31:57.246 12:54:16 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:31:57.246 12:54:16 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:31:57.246 12:54:16 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:31:57.246 12:54:16 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:31:57.246 12:54:16 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:31:57.246 12:54:16 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:31:57.246 12:54:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:57.246 12:54:16 -- common/autotest_common.sh@10 -- # set +x 00:31:57.246 [2024-07-22 12:54:16.427267] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 69708 has claimed it. 
00:31:57.246 2024/07/22 12:54:16 error on JSON-RPC call, method: framework_enable_cpumask_locks, params: map[], err: error received for framework_enable_cpumask_locks method, err: Code=-32603 Msg=Failed to claim CPU core: 2 00:31:57.246 request: 00:31:57.246 { 00:31:57.246 "method": "framework_enable_cpumask_locks", 00:31:57.246 "params": {} 00:31:57.246 } 00:31:57.246 Got JSON-RPC error response 00:31:57.246 GoRPCClient: error on JSON-RPC call 00:31:57.246 12:54:16 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:31:57.246 12:54:16 -- common/autotest_common.sh@643 -- # es=1 00:31:57.246 12:54:16 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:31:57.246 12:54:16 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:31:57.246 12:54:16 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:31:57.246 12:54:16 -- event/cpu_locks.sh@158 -- # waitforlisten 69708 /var/tmp/spdk.sock 00:31:57.246 12:54:16 -- common/autotest_common.sh@819 -- # '[' -z 69708 ']' 00:31:57.246 12:54:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:57.246 12:54:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:57.246 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:57.246 12:54:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:57.246 12:54:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:57.246 12:54:16 -- common/autotest_common.sh@10 -- # set +x 00:31:57.507 12:54:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:57.507 12:54:16 -- common/autotest_common.sh@852 -- # return 0 00:31:57.507 12:54:16 -- event/cpu_locks.sh@159 -- # waitforlisten 69738 /var/tmp/spdk2.sock 00:31:57.507 12:54:16 -- common/autotest_common.sh@819 -- # '[' -z 69738 ']' 00:31:57.507 12:54:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:31:57.507 12:54:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:57.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:31:57.507 12:54:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:31:57.507 12:54:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:57.507 12:54:16 -- common/autotest_common.sh@10 -- # set +x 00:31:57.765 12:54:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:57.765 12:54:16 -- common/autotest_common.sh@852 -- # return 0 00:31:57.765 12:54:16 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:31:57.765 12:54:16 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:31:57.765 12:54:16 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:31:57.765 12:54:16 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:31:57.766 00:31:57.766 real 0m2.720s 00:31:57.766 user 0m1.410s 00:31:57.766 sys 0m0.238s 00:31:57.766 12:54:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:57.766 12:54:16 -- common/autotest_common.sh@10 -- # set +x 00:31:57.766 ************************************ 00:31:57.766 END TEST locking_overlapped_coremask_via_rpc 00:31:57.766 ************************************ 00:31:57.766 12:54:17 -- event/cpu_locks.sh@174 -- # cleanup 00:31:57.766 12:54:17 -- event/cpu_locks.sh@15 -- # [[ -z 69708 ]] 00:31:57.766 12:54:17 -- event/cpu_locks.sh@15 -- # killprocess 69708 00:31:57.766 12:54:17 -- common/autotest_common.sh@926 -- # '[' -z 69708 ']' 00:31:57.766 12:54:17 -- common/autotest_common.sh@930 -- # kill -0 69708 00:31:57.766 12:54:17 -- common/autotest_common.sh@931 -- # uname 00:31:57.766 12:54:17 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:57.766 12:54:17 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 69708 00:31:57.766 12:54:17 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:31:57.766 12:54:17 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:31:57.766 12:54:17 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 69708' 00:31:57.766 killing process with pid 69708 00:31:57.766 12:54:17 -- common/autotest_common.sh@945 -- # kill 69708 00:31:57.766 12:54:17 -- common/autotest_common.sh@950 -- # wait 69708 00:31:58.331 12:54:17 -- event/cpu_locks.sh@16 -- # [[ -z 69738 ]] 00:31:58.331 12:54:17 -- event/cpu_locks.sh@16 -- # killprocess 69738 00:31:58.331 12:54:17 -- common/autotest_common.sh@926 -- # '[' -z 69738 ']' 00:31:58.331 12:54:17 -- common/autotest_common.sh@930 -- # kill -0 69738 00:31:58.331 12:54:17 -- common/autotest_common.sh@931 -- # uname 00:31:58.331 12:54:17 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:58.331 12:54:17 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 69738 00:31:58.331 12:54:17 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:31:58.331 12:54:17 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:31:58.331 killing process with pid 69738 00:31:58.331 12:54:17 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 69738' 00:31:58.331 12:54:17 -- common/autotest_common.sh@945 -- # kill 69738 00:31:58.331 12:54:17 -- common/autotest_common.sh@950 -- # wait 69738 00:31:58.589 12:54:17 -- event/cpu_locks.sh@18 -- # rm -f 00:31:58.589 12:54:17 -- event/cpu_locks.sh@1 -- # cleanup 00:31:58.589 12:54:17 -- event/cpu_locks.sh@15 -- # [[ -z 69708 ]] 00:31:58.589 12:54:17 -- event/cpu_locks.sh@15 -- # killprocess 69708 00:31:58.589 12:54:17 -- 
common/autotest_common.sh@926 -- # '[' -z 69708 ']' 00:31:58.589 12:54:17 -- common/autotest_common.sh@930 -- # kill -0 69708 00:31:58.589 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (69708) - No such process 00:31:58.589 Process with pid 69708 is not found 00:31:58.589 12:54:17 -- common/autotest_common.sh@953 -- # echo 'Process with pid 69708 is not found' 00:31:58.589 12:54:17 -- event/cpu_locks.sh@16 -- # [[ -z 69738 ]] 00:31:58.589 12:54:17 -- event/cpu_locks.sh@16 -- # killprocess 69738 00:31:58.589 12:54:17 -- common/autotest_common.sh@926 -- # '[' -z 69738 ']' 00:31:58.589 12:54:17 -- common/autotest_common.sh@930 -- # kill -0 69738 00:31:58.589 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (69738) - No such process 00:31:58.589 Process with pid 69738 is not found 00:31:58.589 12:54:17 -- common/autotest_common.sh@953 -- # echo 'Process with pid 69738 is not found' 00:31:58.589 12:54:17 -- event/cpu_locks.sh@18 -- # rm -f 00:31:58.589 00:31:58.589 real 0m21.164s 00:31:58.589 user 0m37.143s 00:31:58.589 sys 0m5.770s 00:31:58.589 12:54:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:58.589 12:54:17 -- common/autotest_common.sh@10 -- # set +x 00:31:58.589 ************************************ 00:31:58.589 END TEST cpu_locks 00:31:58.589 ************************************ 00:31:58.589 00:31:58.589 real 0m47.829s 00:31:58.589 user 1m32.560s 00:31:58.589 sys 0m9.681s 00:31:58.589 12:54:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:58.589 12:54:17 -- common/autotest_common.sh@10 -- # set +x 00:31:58.589 ************************************ 00:31:58.589 END TEST event 00:31:58.589 ************************************ 00:31:58.589 12:54:17 -- spdk/autotest.sh@188 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:31:58.590 12:54:17 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:31:58.590 12:54:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:58.590 12:54:17 -- common/autotest_common.sh@10 -- # set +x 00:31:58.590 ************************************ 00:31:58.590 START TEST thread 00:31:58.590 ************************************ 00:31:58.590 12:54:17 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:31:58.848 * Looking for test storage... 00:31:58.848 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:31:58.848 12:54:18 -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:31:58.848 12:54:18 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:31:58.848 12:54:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:58.848 12:54:18 -- common/autotest_common.sh@10 -- # set +x 00:31:58.848 ************************************ 00:31:58.848 START TEST thread_poller_perf 00:31:58.848 ************************************ 00:31:58.848 12:54:18 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:31:58.848 [2024-07-22 12:54:18.059093] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:31:58.848 [2024-07-22 12:54:18.059220] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69889 ] 00:31:58.848 [2024-07-22 12:54:18.196087] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:59.107 [2024-07-22 12:54:18.281291] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:59.107 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:32:00.042 ====================================== 00:32:00.042 busy:2214656024 (cyc) 00:32:00.042 total_run_count: 293000 00:32:00.042 tsc_hz: 2200000000 (cyc) 00:32:00.042 ====================================== 00:32:00.042 poller_cost: 7558 (cyc), 3435 (nsec) 00:32:00.042 00:32:00.042 real 0m1.317s 00:32:00.042 user 0m1.157s 00:32:00.042 sys 0m0.052s 00:32:00.042 12:54:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:00.042 12:54:19 -- common/autotest_common.sh@10 -- # set +x 00:32:00.042 ************************************ 00:32:00.042 END TEST thread_poller_perf 00:32:00.042 ************************************ 00:32:00.042 12:54:19 -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:32:00.042 12:54:19 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:32:00.042 12:54:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:00.042 12:54:19 -- common/autotest_common.sh@10 -- # set +x 00:32:00.042 ************************************ 00:32:00.042 START TEST thread_poller_perf 00:32:00.042 ************************************ 00:32:00.042 12:54:19 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:32:00.042 [2024-07-22 12:54:19.426508] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:32:00.042 [2024-07-22 12:54:19.426587] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69921 ] 00:32:00.300 [2024-07-22 12:54:19.558132] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:00.300 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:32:00.300 [2024-07-22 12:54:19.642461] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:01.672 ====================================== 00:32:01.672 busy:2202928568 (cyc) 00:32:01.672 total_run_count: 4186000 00:32:01.672 tsc_hz: 2200000000 (cyc) 00:32:01.672 ====================================== 00:32:01.672 poller_cost: 526 (cyc), 239 (nsec) 00:32:01.672 ************************************ 00:32:01.672 END TEST thread_poller_perf 00:32:01.672 ************************************ 00:32:01.672 00:32:01.672 real 0m1.310s 00:32:01.672 user 0m1.151s 00:32:01.672 sys 0m0.052s 00:32:01.672 12:54:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:01.672 12:54:20 -- common/autotest_common.sh@10 -- # set +x 00:32:01.672 12:54:20 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:32:01.672 ************************************ 00:32:01.672 END TEST thread 00:32:01.672 ************************************ 00:32:01.672 00:32:01.672 real 0m2.792s 00:32:01.672 user 0m2.382s 00:32:01.672 sys 0m0.194s 00:32:01.672 12:54:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:01.672 12:54:20 -- common/autotest_common.sh@10 -- # set +x 00:32:01.672 12:54:20 -- spdk/autotest.sh@189 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:32:01.672 12:54:20 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:32:01.672 12:54:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:01.672 12:54:20 -- common/autotest_common.sh@10 -- # set +x 00:32:01.672 ************************************ 00:32:01.672 START TEST accel 00:32:01.672 ************************************ 00:32:01.672 12:54:20 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:32:01.672 * Looking for test storage... 00:32:01.672 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:32:01.672 12:54:20 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:32:01.672 12:54:20 -- accel/accel.sh@74 -- # get_expected_opcs 00:32:01.672 12:54:20 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:32:01.672 12:54:20 -- accel/accel.sh@59 -- # spdk_tgt_pid=70000 00:32:01.672 12:54:20 -- accel/accel.sh@60 -- # waitforlisten 70000 00:32:01.672 12:54:20 -- common/autotest_common.sh@819 -- # '[' -z 70000 ']' 00:32:01.672 12:54:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:01.672 12:54:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:32:01.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:01.672 12:54:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:01.672 12:54:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:32:01.672 12:54:20 -- common/autotest_common.sh@10 -- # set +x 00:32:01.672 12:54:20 -- accel/accel.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:32:01.672 12:54:20 -- accel/accel.sh@58 -- # build_accel_config 00:32:01.672 12:54:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:32:01.672 12:54:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:32:01.672 12:54:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:32:01.672 12:54:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:32:01.672 12:54:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:32:01.672 12:54:20 -- accel/accel.sh@41 -- # local IFS=, 00:32:01.672 12:54:20 -- accel/accel.sh@42 -- # jq -r . 
00:32:01.673 [2024-07-22 12:54:20.939494] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:32:01.673 [2024-07-22 12:54:20.939609] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70000 ] 00:32:01.673 [2024-07-22 12:54:21.076673] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:01.930 [2024-07-22 12:54:21.170620] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:32:01.930 [2024-07-22 12:54:21.170811] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:02.495 12:54:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:32:02.495 12:54:21 -- common/autotest_common.sh@852 -- # return 0 00:32:02.495 12:54:21 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:32:02.495 12:54:21 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:32:02.495 12:54:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:02.495 12:54:21 -- accel/accel.sh@62 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:32:02.495 12:54:21 -- common/autotest_common.sh@10 -- # set +x 00:32:02.495 12:54:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:02.753 12:54:21 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:32:02.753 12:54:21 -- accel/accel.sh@64 -- # IFS== 00:32:02.753 12:54:21 -- accel/accel.sh@64 -- # read -r opc module 00:32:02.753 12:54:21 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:32:02.753 12:54:21 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:32:02.753 12:54:21 -- accel/accel.sh@64 -- # IFS== 00:32:02.753 12:54:21 -- accel/accel.sh@64 -- # read -r opc module 00:32:02.753 12:54:21 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:32:02.753 12:54:21 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:32:02.753 12:54:21 -- accel/accel.sh@64 -- # IFS== 00:32:02.753 12:54:21 -- accel/accel.sh@64 -- # read -r opc module 00:32:02.753 12:54:21 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:32:02.753 12:54:21 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:32:02.753 12:54:21 -- accel/accel.sh@64 -- # IFS== 00:32:02.753 12:54:21 -- accel/accel.sh@64 -- # read -r opc module 00:32:02.753 12:54:21 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:32:02.753 12:54:21 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:32:02.753 12:54:21 -- accel/accel.sh@64 -- # IFS== 00:32:02.753 12:54:21 -- accel/accel.sh@64 -- # read -r opc module 00:32:02.753 12:54:21 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:32:02.753 12:54:21 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:32:02.753 12:54:21 -- accel/accel.sh@64 -- # IFS== 00:32:02.753 12:54:21 -- accel/accel.sh@64 -- # read -r opc module 00:32:02.753 12:54:21 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:32:02.753 12:54:21 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:32:02.753 12:54:21 -- accel/accel.sh@64 -- # IFS== 00:32:02.753 12:54:21 -- accel/accel.sh@64 -- # read -r opc module 00:32:02.753 12:54:21 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:32:02.753 12:54:21 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:32:02.753 12:54:21 -- accel/accel.sh@64 -- # IFS== 
00:32:02.753 12:54:21 -- accel/accel.sh@64 -- # read -r opc module 00:32:02.753 12:54:21 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:32:02.753 12:54:21 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:32:02.753 12:54:21 -- accel/accel.sh@64 -- # IFS== 00:32:02.753 12:54:21 -- accel/accel.sh@64 -- # read -r opc module 00:32:02.753 12:54:21 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:32:02.753 12:54:21 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:32:02.753 12:54:21 -- accel/accel.sh@64 -- # IFS== 00:32:02.753 12:54:21 -- accel/accel.sh@64 -- # read -r opc module 00:32:02.753 12:54:21 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:32:02.753 12:54:21 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:32:02.753 12:54:21 -- accel/accel.sh@64 -- # IFS== 00:32:02.753 12:54:21 -- accel/accel.sh@64 -- # read -r opc module 00:32:02.753 12:54:21 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:32:02.753 12:54:21 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:32:02.753 12:54:21 -- accel/accel.sh@64 -- # IFS== 00:32:02.753 12:54:21 -- accel/accel.sh@64 -- # read -r opc module 00:32:02.753 12:54:21 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:32:02.753 12:54:21 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:32:02.753 12:54:21 -- accel/accel.sh@64 -- # IFS== 00:32:02.753 12:54:21 -- accel/accel.sh@64 -- # read -r opc module 00:32:02.753 12:54:21 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:32:02.753 12:54:21 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:32:02.753 12:54:21 -- accel/accel.sh@64 -- # IFS== 00:32:02.753 12:54:21 -- accel/accel.sh@64 -- # read -r opc module 00:32:02.753 12:54:21 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:32:02.753 12:54:21 -- accel/accel.sh@67 -- # killprocess 70000 00:32:02.753 12:54:21 -- common/autotest_common.sh@926 -- # '[' -z 70000 ']' 00:32:02.753 12:54:21 -- common/autotest_common.sh@930 -- # kill -0 70000 00:32:02.753 12:54:21 -- common/autotest_common.sh@931 -- # uname 00:32:02.753 12:54:21 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:32:02.753 12:54:21 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 70000 00:32:02.753 12:54:21 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:32:02.753 12:54:21 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:32:02.753 12:54:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 70000' 00:32:02.753 killing process with pid 70000 00:32:02.753 12:54:21 -- common/autotest_common.sh@945 -- # kill 70000 00:32:02.753 12:54:21 -- common/autotest_common.sh@950 -- # wait 70000 00:32:03.011 12:54:22 -- accel/accel.sh@68 -- # trap - ERR 00:32:03.011 12:54:22 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:32:03.011 12:54:22 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:32:03.011 12:54:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:03.011 12:54:22 -- common/autotest_common.sh@10 -- # set +x 00:32:03.011 12:54:22 -- common/autotest_common.sh@1104 -- # accel_perf -h 00:32:03.011 12:54:22 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:32:03.011 12:54:22 -- accel/accel.sh@12 -- # build_accel_config 00:32:03.011 12:54:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:32:03.011 12:54:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:32:03.011 12:54:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 
00:32:03.011 12:54:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:32:03.011 12:54:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:32:03.011 12:54:22 -- accel/accel.sh@41 -- # local IFS=, 00:32:03.011 12:54:22 -- accel/accel.sh@42 -- # jq -r . 00:32:03.011 12:54:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:03.011 12:54:22 -- common/autotest_common.sh@10 -- # set +x 00:32:03.011 12:54:22 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:32:03.011 12:54:22 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:32:03.011 12:54:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:03.011 12:54:22 -- common/autotest_common.sh@10 -- # set +x 00:32:03.011 ************************************ 00:32:03.011 START TEST accel_missing_filename 00:32:03.011 ************************************ 00:32:03.011 12:54:22 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress 00:32:03.011 12:54:22 -- common/autotest_common.sh@640 -- # local es=0 00:32:03.011 12:54:22 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress 00:32:03.011 12:54:22 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:32:03.011 12:54:22 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:32:03.011 12:54:22 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:32:03.011 12:54:22 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:32:03.011 12:54:22 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress 00:32:03.011 12:54:22 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:32:03.011 12:54:22 -- accel/accel.sh@12 -- # build_accel_config 00:32:03.011 12:54:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:32:03.011 12:54:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:32:03.011 12:54:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:32:03.011 12:54:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:32:03.011 12:54:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:32:03.011 12:54:22 -- accel/accel.sh@41 -- # local IFS=, 00:32:03.011 12:54:22 -- accel/accel.sh@42 -- # jq -r . 00:32:03.270 [2024-07-22 12:54:22.447602] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:32:03.270 [2024-07-22 12:54:22.447703] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70064 ] 00:32:03.270 [2024-07-22 12:54:22.585051] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:03.270 [2024-07-22 12:54:22.676226] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:03.528 [2024-07-22 12:54:22.730345] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:32:03.528 [2024-07-22 12:54:22.806827] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:32:03.528 A filename is required. 
00:32:03.528 ************************************ 00:32:03.528 END TEST accel_missing_filename 00:32:03.528 ************************************ 00:32:03.528 12:54:22 -- common/autotest_common.sh@643 -- # es=234 00:32:03.528 12:54:22 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:32:03.528 12:54:22 -- common/autotest_common.sh@652 -- # es=106 00:32:03.528 12:54:22 -- common/autotest_common.sh@653 -- # case "$es" in 00:32:03.528 12:54:22 -- common/autotest_common.sh@660 -- # es=1 00:32:03.528 12:54:22 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:32:03.528 00:32:03.528 real 0m0.459s 00:32:03.528 user 0m0.290s 00:32:03.528 sys 0m0.109s 00:32:03.528 12:54:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:03.528 12:54:22 -- common/autotest_common.sh@10 -- # set +x 00:32:03.528 12:54:22 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:32:03.528 12:54:22 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:32:03.528 12:54:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:03.528 12:54:22 -- common/autotest_common.sh@10 -- # set +x 00:32:03.528 ************************************ 00:32:03.528 START TEST accel_compress_verify 00:32:03.528 ************************************ 00:32:03.528 12:54:22 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:32:03.528 12:54:22 -- common/autotest_common.sh@640 -- # local es=0 00:32:03.528 12:54:22 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:32:03.528 12:54:22 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:32:03.528 12:54:22 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:32:03.528 12:54:22 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:32:03.528 12:54:22 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:32:03.528 12:54:22 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:32:03.528 12:54:22 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:32:03.528 12:54:22 -- accel/accel.sh@12 -- # build_accel_config 00:32:03.528 12:54:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:32:03.528 12:54:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:32:03.528 12:54:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:32:03.528 12:54:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:32:03.528 12:54:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:32:03.528 12:54:22 -- accel/accel.sh@41 -- # local IFS=, 00:32:03.528 12:54:22 -- accel/accel.sh@42 -- # jq -r . 00:32:03.786 [2024-07-22 12:54:22.954270] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:32:03.786 [2024-07-22 12:54:22.954358] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70094 ] 00:32:03.786 [2024-07-22 12:54:23.089101] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:03.786 [2024-07-22 12:54:23.184005] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:04.044 [2024-07-22 12:54:23.238490] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:32:04.044 [2024-07-22 12:54:23.313971] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:32:04.044 00:32:04.044 Compression does not support the verify option, aborting. 00:32:04.044 ************************************ 00:32:04.044 END TEST accel_compress_verify 00:32:04.044 ************************************ 00:32:04.044 12:54:23 -- common/autotest_common.sh@643 -- # es=161 00:32:04.044 12:54:23 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:32:04.044 12:54:23 -- common/autotest_common.sh@652 -- # es=33 00:32:04.044 12:54:23 -- common/autotest_common.sh@653 -- # case "$es" in 00:32:04.044 12:54:23 -- common/autotest_common.sh@660 -- # es=1 00:32:04.044 12:54:23 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:32:04.044 00:32:04.044 real 0m0.458s 00:32:04.044 user 0m0.295s 00:32:04.044 sys 0m0.110s 00:32:04.044 12:54:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:04.044 12:54:23 -- common/autotest_common.sh@10 -- # set +x 00:32:04.044 12:54:23 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:32:04.044 12:54:23 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:32:04.044 12:54:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:04.044 12:54:23 -- common/autotest_common.sh@10 -- # set +x 00:32:04.044 ************************************ 00:32:04.044 START TEST accel_wrong_workload 00:32:04.044 ************************************ 00:32:04.044 12:54:23 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w foobar 00:32:04.044 12:54:23 -- common/autotest_common.sh@640 -- # local es=0 00:32:04.044 12:54:23 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:32:04.044 12:54:23 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:32:04.044 12:54:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:32:04.044 12:54:23 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:32:04.044 12:54:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:32:04.044 12:54:23 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w foobar 00:32:04.044 12:54:23 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:32:04.044 12:54:23 -- accel/accel.sh@12 -- # build_accel_config 00:32:04.044 12:54:23 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:32:04.044 12:54:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:32:04.044 12:54:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:32:04.044 12:54:23 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:32:04.044 12:54:23 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:32:04.044 12:54:23 -- accel/accel.sh@41 -- # local IFS=, 00:32:04.044 12:54:23 -- accel/accel.sh@42 -- # jq -r . 
00:32:04.044 Unsupported workload type: foobar 00:32:04.044 [2024-07-22 12:54:23.464093] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:32:04.304 accel_perf options: 00:32:04.304 [-h help message] 00:32:04.304 [-q queue depth per core] 00:32:04.304 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:32:04.304 [-T number of threads per core 00:32:04.304 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:32:04.304 [-t time in seconds] 00:32:04.304 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:32:04.304 [ dif_verify, , dif_generate, dif_generate_copy 00:32:04.304 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:32:04.304 [-l for compress/decompress workloads, name of uncompressed input file 00:32:04.304 [-S for crc32c workload, use this seed value (default 0) 00:32:04.304 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:32:04.304 [-f for fill workload, use this BYTE value (default 255) 00:32:04.304 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:32:04.304 [-y verify result if this switch is on] 00:32:04.304 [-a tasks to allocate per core (default: same value as -q)] 00:32:04.304 Can be used to spread operations across a wider range of memory. 00:32:04.304 12:54:23 -- common/autotest_common.sh@643 -- # es=1 00:32:04.304 12:54:23 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:32:04.304 12:54:23 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:32:04.304 ************************************ 00:32:04.304 END TEST accel_wrong_workload 00:32:04.304 ************************************ 00:32:04.304 12:54:23 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:32:04.304 00:32:04.304 real 0m0.032s 00:32:04.304 user 0m0.016s 00:32:04.304 sys 0m0.015s 00:32:04.304 12:54:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:04.304 12:54:23 -- common/autotest_common.sh@10 -- # set +x 00:32:04.304 12:54:23 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:32:04.304 12:54:23 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:32:04.304 12:54:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:04.304 12:54:23 -- common/autotest_common.sh@10 -- # set +x 00:32:04.304 ************************************ 00:32:04.304 START TEST accel_negative_buffers 00:32:04.304 ************************************ 00:32:04.304 12:54:23 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:32:04.304 12:54:23 -- common/autotest_common.sh@640 -- # local es=0 00:32:04.304 12:54:23 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:32:04.304 12:54:23 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:32:04.304 12:54:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:32:04.304 12:54:23 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:32:04.304 12:54:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:32:04.304 12:54:23 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w xor -y -x -1 00:32:04.304 12:54:23 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:32:04.304 12:54:23 -- accel/accel.sh@12 -- # 
build_accel_config 00:32:04.304 12:54:23 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:32:04.304 12:54:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:32:04.304 12:54:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:32:04.304 12:54:23 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:32:04.304 12:54:23 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:32:04.304 12:54:23 -- accel/accel.sh@41 -- # local IFS=, 00:32:04.304 12:54:23 -- accel/accel.sh@42 -- # jq -r . 00:32:04.304 -x option must be non-negative. 00:32:04.304 [2024-07-22 12:54:23.539805] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:32:04.304 accel_perf options: 00:32:04.304 [-h help message] 00:32:04.304 [-q queue depth per core] 00:32:04.304 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:32:04.304 [-T number of threads per core 00:32:04.304 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:32:04.304 [-t time in seconds] 00:32:04.304 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:32:04.304 [ dif_verify, , dif_generate, dif_generate_copy 00:32:04.304 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:32:04.304 [-l for compress/decompress workloads, name of uncompressed input file 00:32:04.304 [-S for crc32c workload, use this seed value (default 0) 00:32:04.304 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:32:04.304 [-f for fill workload, use this BYTE value (default 255) 00:32:04.304 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:32:04.304 [-y verify result if this switch is on] 00:32:04.304 [-a tasks to allocate per core (default: same value as -q)] 00:32:04.304 Can be used to spread operations across a wider range of memory. 
00:32:04.304 12:54:23 -- common/autotest_common.sh@643 -- # es=1 00:32:04.304 12:54:23 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:32:04.304 12:54:23 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:32:04.304 12:54:23 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:32:04.304 00:32:04.304 real 0m0.032s 00:32:04.304 user 0m0.013s 00:32:04.304 sys 0m0.018s 00:32:04.304 12:54:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:04.304 ************************************ 00:32:04.304 END TEST accel_negative_buffers 00:32:04.304 ************************************ 00:32:04.304 12:54:23 -- common/autotest_common.sh@10 -- # set +x 00:32:04.304 12:54:23 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:32:04.304 12:54:23 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:32:04.304 12:54:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:04.304 12:54:23 -- common/autotest_common.sh@10 -- # set +x 00:32:04.304 ************************************ 00:32:04.304 START TEST accel_crc32c 00:32:04.304 ************************************ 00:32:04.304 12:54:23 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -S 32 -y 00:32:04.304 12:54:23 -- accel/accel.sh@16 -- # local accel_opc 00:32:04.304 12:54:23 -- accel/accel.sh@17 -- # local accel_module 00:32:04.304 12:54:23 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:32:04.304 12:54:23 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:32:04.304 12:54:23 -- accel/accel.sh@12 -- # build_accel_config 00:32:04.304 12:54:23 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:32:04.304 12:54:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:32:04.304 12:54:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:32:04.304 12:54:23 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:32:04.305 12:54:23 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:32:04.305 12:54:23 -- accel/accel.sh@41 -- # local IFS=, 00:32:04.305 12:54:23 -- accel/accel.sh@42 -- # jq -r . 00:32:04.305 [2024-07-22 12:54:23.619687] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:32:04.305 [2024-07-22 12:54:23.619808] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70153 ] 00:32:04.563 [2024-07-22 12:54:23.757160] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:04.563 [2024-07-22 12:54:23.857903] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:05.935 12:54:25 -- accel/accel.sh@18 -- # out=' 00:32:05.935 SPDK Configuration: 00:32:05.935 Core mask: 0x1 00:32:05.935 00:32:05.935 Accel Perf Configuration: 00:32:05.935 Workload Type: crc32c 00:32:05.935 CRC-32C seed: 32 00:32:05.935 Transfer size: 4096 bytes 00:32:05.935 Vector count 1 00:32:05.935 Module: software 00:32:05.935 Queue depth: 32 00:32:05.935 Allocate depth: 32 00:32:05.935 # threads/core: 1 00:32:05.935 Run time: 1 seconds 00:32:05.935 Verify: Yes 00:32:05.935 00:32:05.935 Running for 1 seconds... 
00:32:05.935 00:32:05.935 Core,Thread Transfers Bandwidth Failed Miscompares 00:32:05.936 ------------------------------------------------------------------------------------ 00:32:05.936 0,0 436864/s 1706 MiB/s 0 0 00:32:05.936 ==================================================================================== 00:32:05.936 Total 436864/s 1706 MiB/s 0 0' 00:32:05.936 12:54:25 -- accel/accel.sh@20 -- # IFS=: 00:32:05.936 12:54:25 -- accel/accel.sh@20 -- # read -r var val 00:32:05.936 12:54:25 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:32:05.936 12:54:25 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:32:05.936 12:54:25 -- accel/accel.sh@12 -- # build_accel_config 00:32:05.936 12:54:25 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:32:05.936 12:54:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:32:05.936 12:54:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:32:05.936 12:54:25 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:32:05.936 12:54:25 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:32:05.936 12:54:25 -- accel/accel.sh@41 -- # local IFS=, 00:32:05.936 12:54:25 -- accel/accel.sh@42 -- # jq -r . 00:32:05.936 [2024-07-22 12:54:25.097022] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:32:05.936 [2024-07-22 12:54:25.097166] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70172 ] 00:32:05.936 [2024-07-22 12:54:25.239343] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:05.936 [2024-07-22 12:54:25.334972] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:06.194 12:54:25 -- accel/accel.sh@21 -- # val= 00:32:06.194 12:54:25 -- accel/accel.sh@22 -- # case "$var" in 00:32:06.194 12:54:25 -- accel/accel.sh@20 -- # IFS=: 00:32:06.194 12:54:25 -- accel/accel.sh@20 -- # read -r var val 00:32:06.194 12:54:25 -- accel/accel.sh@21 -- # val= 00:32:06.194 12:54:25 -- accel/accel.sh@22 -- # case "$var" in 00:32:06.194 12:54:25 -- accel/accel.sh@20 -- # IFS=: 00:32:06.194 12:54:25 -- accel/accel.sh@20 -- # read -r var val 00:32:06.194 12:54:25 -- accel/accel.sh@21 -- # val=0x1 00:32:06.194 12:54:25 -- accel/accel.sh@22 -- # case "$var" in 00:32:06.194 12:54:25 -- accel/accel.sh@20 -- # IFS=: 00:32:06.194 12:54:25 -- accel/accel.sh@20 -- # read -r var val 00:32:06.194 12:54:25 -- accel/accel.sh@21 -- # val= 00:32:06.194 12:54:25 -- accel/accel.sh@22 -- # case "$var" in 00:32:06.194 12:54:25 -- accel/accel.sh@20 -- # IFS=: 00:32:06.194 12:54:25 -- accel/accel.sh@20 -- # read -r var val 00:32:06.194 12:54:25 -- accel/accel.sh@21 -- # val= 00:32:06.194 12:54:25 -- accel/accel.sh@22 -- # case "$var" in 00:32:06.194 12:54:25 -- accel/accel.sh@20 -- # IFS=: 00:32:06.194 12:54:25 -- accel/accel.sh@20 -- # read -r var val 00:32:06.194 12:54:25 -- accel/accel.sh@21 -- # val=crc32c 00:32:06.194 12:54:25 -- accel/accel.sh@22 -- # case "$var" in 00:32:06.194 12:54:25 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:32:06.194 12:54:25 -- accel/accel.sh@20 -- # IFS=: 00:32:06.194 12:54:25 -- accel/accel.sh@20 -- # read -r var val 00:32:06.194 12:54:25 -- accel/accel.sh@21 -- # val=32 00:32:06.194 12:54:25 -- accel/accel.sh@22 -- # case "$var" in 00:32:06.194 12:54:25 -- accel/accel.sh@20 -- # IFS=: 00:32:06.194 12:54:25 -- accel/accel.sh@20 -- # read -r var val 00:32:06.194 12:54:25 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:32:06.194 12:54:25 -- accel/accel.sh@22 -- # case "$var" in 00:32:06.194 12:54:25 -- accel/accel.sh@20 -- # IFS=: 00:32:06.194 12:54:25 -- accel/accel.sh@20 -- # read -r var val 00:32:06.194 12:54:25 -- accel/accel.sh@21 -- # val= 00:32:06.194 12:54:25 -- accel/accel.sh@22 -- # case "$var" in 00:32:06.194 12:54:25 -- accel/accel.sh@20 -- # IFS=: 00:32:06.194 12:54:25 -- accel/accel.sh@20 -- # read -r var val 00:32:06.194 12:54:25 -- accel/accel.sh@21 -- # val=software 00:32:06.194 12:54:25 -- accel/accel.sh@22 -- # case "$var" in 00:32:06.194 12:54:25 -- accel/accel.sh@23 -- # accel_module=software 00:32:06.194 12:54:25 -- accel/accel.sh@20 -- # IFS=: 00:32:06.194 12:54:25 -- accel/accel.sh@20 -- # read -r var val 00:32:06.194 12:54:25 -- accel/accel.sh@21 -- # val=32 00:32:06.194 12:54:25 -- accel/accel.sh@22 -- # case "$var" in 00:32:06.194 12:54:25 -- accel/accel.sh@20 -- # IFS=: 00:32:06.194 12:54:25 -- accel/accel.sh@20 -- # read -r var val 00:32:06.194 12:54:25 -- accel/accel.sh@21 -- # val=32 00:32:06.194 12:54:25 -- accel/accel.sh@22 -- # case "$var" in 00:32:06.194 12:54:25 -- accel/accel.sh@20 -- # IFS=: 00:32:06.194 12:54:25 -- accel/accel.sh@20 -- # read -r var val 00:32:06.194 12:54:25 -- accel/accel.sh@21 -- # val=1 00:32:06.194 12:54:25 -- accel/accel.sh@22 -- # case "$var" in 00:32:06.194 12:54:25 -- accel/accel.sh@20 -- # IFS=: 00:32:06.194 12:54:25 -- accel/accel.sh@20 -- # read -r var val 00:32:06.194 12:54:25 -- accel/accel.sh@21 -- # val='1 seconds' 00:32:06.194 12:54:25 -- accel/accel.sh@22 -- # case "$var" in 00:32:06.194 12:54:25 -- accel/accel.sh@20 -- # IFS=: 00:32:06.194 12:54:25 -- accel/accel.sh@20 -- # read -r var val 00:32:06.194 12:54:25 -- accel/accel.sh@21 -- # val=Yes 00:32:06.194 12:54:25 -- accel/accel.sh@22 -- # case "$var" in 00:32:06.194 12:54:25 -- accel/accel.sh@20 -- # IFS=: 00:32:06.194 12:54:25 -- accel/accel.sh@20 -- # read -r var val 00:32:06.194 12:54:25 -- accel/accel.sh@21 -- # val= 00:32:06.194 12:54:25 -- accel/accel.sh@22 -- # case "$var" in 00:32:06.194 12:54:25 -- accel/accel.sh@20 -- # IFS=: 00:32:06.194 12:54:25 -- accel/accel.sh@20 -- # read -r var val 00:32:06.194 12:54:25 -- accel/accel.sh@21 -- # val= 00:32:06.194 12:54:25 -- accel/accel.sh@22 -- # case "$var" in 00:32:06.194 12:54:25 -- accel/accel.sh@20 -- # IFS=: 00:32:06.194 12:54:25 -- accel/accel.sh@20 -- # read -r var val 00:32:07.568 12:54:26 -- accel/accel.sh@21 -- # val= 00:32:07.568 12:54:26 -- accel/accel.sh@22 -- # case "$var" in 00:32:07.568 12:54:26 -- accel/accel.sh@20 -- # IFS=: 00:32:07.568 12:54:26 -- accel/accel.sh@20 -- # read -r var val 00:32:07.568 12:54:26 -- accel/accel.sh@21 -- # val= 00:32:07.568 12:54:26 -- accel/accel.sh@22 -- # case "$var" in 00:32:07.568 12:54:26 -- accel/accel.sh@20 -- # IFS=: 00:32:07.568 12:54:26 -- accel/accel.sh@20 -- # read -r var val 00:32:07.568 12:54:26 -- accel/accel.sh@21 -- # val= 00:32:07.568 12:54:26 -- accel/accel.sh@22 -- # case "$var" in 00:32:07.568 12:54:26 -- accel/accel.sh@20 -- # IFS=: 00:32:07.568 12:54:26 -- accel/accel.sh@20 -- # read -r var val 00:32:07.568 12:54:26 -- accel/accel.sh@21 -- # val= 00:32:07.568 12:54:26 -- accel/accel.sh@22 -- # case "$var" in 00:32:07.568 12:54:26 -- accel/accel.sh@20 -- # IFS=: 00:32:07.568 12:54:26 -- accel/accel.sh@20 -- # read -r var val 00:32:07.568 12:54:26 -- accel/accel.sh@21 -- # val= 00:32:07.568 12:54:26 -- accel/accel.sh@22 -- # case "$var" in 00:32:07.568 12:54:26 -- accel/accel.sh@20 -- # IFS=: 00:32:07.568 12:54:26 -- 
accel/accel.sh@20 -- # read -r var val 00:32:07.568 12:54:26 -- accel/accel.sh@21 -- # val= 00:32:07.568 12:54:26 -- accel/accel.sh@22 -- # case "$var" in 00:32:07.568 12:54:26 -- accel/accel.sh@20 -- # IFS=: 00:32:07.568 12:54:26 -- accel/accel.sh@20 -- # read -r var val 00:32:07.568 12:54:26 -- accel/accel.sh@28 -- # [[ -n software ]] 00:32:07.568 12:54:26 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:32:07.568 ************************************ 00:32:07.568 END TEST accel_crc32c 00:32:07.568 ************************************ 00:32:07.568 12:54:26 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:07.568 00:32:07.568 real 0m2.960s 00:32:07.568 user 0m2.510s 00:32:07.568 sys 0m0.244s 00:32:07.568 12:54:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:07.568 12:54:26 -- common/autotest_common.sh@10 -- # set +x 00:32:07.568 12:54:26 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:32:07.568 12:54:26 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:32:07.568 12:54:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:07.568 12:54:26 -- common/autotest_common.sh@10 -- # set +x 00:32:07.568 ************************************ 00:32:07.568 START TEST accel_crc32c_C2 00:32:07.568 ************************************ 00:32:07.568 12:54:26 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -y -C 2 00:32:07.568 12:54:26 -- accel/accel.sh@16 -- # local accel_opc 00:32:07.568 12:54:26 -- accel/accel.sh@17 -- # local accel_module 00:32:07.568 12:54:26 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:32:07.568 12:54:26 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:32:07.568 12:54:26 -- accel/accel.sh@12 -- # build_accel_config 00:32:07.568 12:54:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:32:07.568 12:54:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:32:07.568 12:54:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:32:07.568 12:54:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:32:07.569 12:54:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:32:07.569 12:54:26 -- accel/accel.sh@41 -- # local IFS=, 00:32:07.569 12:54:26 -- accel/accel.sh@42 -- # jq -r . 00:32:07.569 [2024-07-22 12:54:26.624311] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:32:07.569 [2024-07-22 12:54:26.624408] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70207 ] 00:32:07.569 [2024-07-22 12:54:26.758030] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:07.569 [2024-07-22 12:54:26.853013] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:08.945 12:54:28 -- accel/accel.sh@18 -- # out=' 00:32:08.945 SPDK Configuration: 00:32:08.945 Core mask: 0x1 00:32:08.945 00:32:08.945 Accel Perf Configuration: 00:32:08.945 Workload Type: crc32c 00:32:08.945 CRC-32C seed: 0 00:32:08.945 Transfer size: 4096 bytes 00:32:08.945 Vector count 2 00:32:08.945 Module: software 00:32:08.945 Queue depth: 32 00:32:08.945 Allocate depth: 32 00:32:08.945 # threads/core: 1 00:32:08.945 Run time: 1 seconds 00:32:08.945 Verify: Yes 00:32:08.945 00:32:08.945 Running for 1 seconds... 
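The accel_crc32c_C2 test started above runs the same crc32c operation over two-element io vectors: per the option listing earlier, -C sets the io vector size (default 1), and the configuration block above reports "Vector count 2" accordingly. Setting aside the -c /dev/fd/62 JSON config that the test harness supplies (empty here), the invocation is essentially:

  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w crc32c -y -C 2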
00:32:08.945 00:32:08.945 Core,Thread Transfers Bandwidth Failed Miscompares 00:32:08.945 ------------------------------------------------------------------------------------ 00:32:08.945 0,0 344256/s 2689 MiB/s 0 0 00:32:08.945 ==================================================================================== 00:32:08.945 Total 344256/s 1344 MiB/s 0 0' 00:32:08.945 12:54:28 -- accel/accel.sh@20 -- # IFS=: 00:32:08.945 12:54:28 -- accel/accel.sh@20 -- # read -r var val 00:32:08.945 12:54:28 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:32:08.945 12:54:28 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:32:08.945 12:54:28 -- accel/accel.sh@12 -- # build_accel_config 00:32:08.945 12:54:28 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:32:08.945 12:54:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:32:08.945 12:54:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:32:08.945 12:54:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:32:08.945 12:54:28 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:32:08.945 12:54:28 -- accel/accel.sh@41 -- # local IFS=, 00:32:08.945 12:54:28 -- accel/accel.sh@42 -- # jq -r . 00:32:08.945 [2024-07-22 12:54:28.087051] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:32:08.945 [2024-07-22 12:54:28.087161] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70225 ] 00:32:08.945 [2024-07-22 12:54:28.224082] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:08.946 [2024-07-22 12:54:28.315412] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:09.204 12:54:28 -- accel/accel.sh@21 -- # val= 00:32:09.204 12:54:28 -- accel/accel.sh@22 -- # case "$var" in 00:32:09.204 12:54:28 -- accel/accel.sh@20 -- # IFS=: 00:32:09.204 12:54:28 -- accel/accel.sh@20 -- # read -r var val 00:32:09.204 12:54:28 -- accel/accel.sh@21 -- # val= 00:32:09.204 12:54:28 -- accel/accel.sh@22 -- # case "$var" in 00:32:09.204 12:54:28 -- accel/accel.sh@20 -- # IFS=: 00:32:09.204 12:54:28 -- accel/accel.sh@20 -- # read -r var val 00:32:09.204 12:54:28 -- accel/accel.sh@21 -- # val=0x1 00:32:09.204 12:54:28 -- accel/accel.sh@22 -- # case "$var" in 00:32:09.204 12:54:28 -- accel/accel.sh@20 -- # IFS=: 00:32:09.204 12:54:28 -- accel/accel.sh@20 -- # read -r var val 00:32:09.204 12:54:28 -- accel/accel.sh@21 -- # val= 00:32:09.204 12:54:28 -- accel/accel.sh@22 -- # case "$var" in 00:32:09.204 12:54:28 -- accel/accel.sh@20 -- # IFS=: 00:32:09.204 12:54:28 -- accel/accel.sh@20 -- # read -r var val 00:32:09.204 12:54:28 -- accel/accel.sh@21 -- # val= 00:32:09.204 12:54:28 -- accel/accel.sh@22 -- # case "$var" in 00:32:09.204 12:54:28 -- accel/accel.sh@20 -- # IFS=: 00:32:09.204 12:54:28 -- accel/accel.sh@20 -- # read -r var val 00:32:09.204 12:54:28 -- accel/accel.sh@21 -- # val=crc32c 00:32:09.204 12:54:28 -- accel/accel.sh@22 -- # case "$var" in 00:32:09.204 12:54:28 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:32:09.204 12:54:28 -- accel/accel.sh@20 -- # IFS=: 00:32:09.204 12:54:28 -- accel/accel.sh@20 -- # read -r var val 00:32:09.204 12:54:28 -- accel/accel.sh@21 -- # val=0 00:32:09.204 12:54:28 -- accel/accel.sh@22 -- # case "$var" in 00:32:09.204 12:54:28 -- accel/accel.sh@20 -- # IFS=: 00:32:09.204 12:54:28 -- accel/accel.sh@20 -- # read -r var val 00:32:09.204 12:54:28 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:32:09.204 12:54:28 -- accel/accel.sh@22 -- # case "$var" in 00:32:09.204 12:54:28 -- accel/accel.sh@20 -- # IFS=: 00:32:09.204 12:54:28 -- accel/accel.sh@20 -- # read -r var val 00:32:09.204 12:54:28 -- accel/accel.sh@21 -- # val= 00:32:09.204 12:54:28 -- accel/accel.sh@22 -- # case "$var" in 00:32:09.204 12:54:28 -- accel/accel.sh@20 -- # IFS=: 00:32:09.204 12:54:28 -- accel/accel.sh@20 -- # read -r var val 00:32:09.204 12:54:28 -- accel/accel.sh@21 -- # val=software 00:32:09.204 12:54:28 -- accel/accel.sh@22 -- # case "$var" in 00:32:09.204 12:54:28 -- accel/accel.sh@23 -- # accel_module=software 00:32:09.204 12:54:28 -- accel/accel.sh@20 -- # IFS=: 00:32:09.204 12:54:28 -- accel/accel.sh@20 -- # read -r var val 00:32:09.204 12:54:28 -- accel/accel.sh@21 -- # val=32 00:32:09.204 12:54:28 -- accel/accel.sh@22 -- # case "$var" in 00:32:09.204 12:54:28 -- accel/accel.sh@20 -- # IFS=: 00:32:09.204 12:54:28 -- accel/accel.sh@20 -- # read -r var val 00:32:09.204 12:54:28 -- accel/accel.sh@21 -- # val=32 00:32:09.204 12:54:28 -- accel/accel.sh@22 -- # case "$var" in 00:32:09.204 12:54:28 -- accel/accel.sh@20 -- # IFS=: 00:32:09.204 12:54:28 -- accel/accel.sh@20 -- # read -r var val 00:32:09.204 12:54:28 -- accel/accel.sh@21 -- # val=1 00:32:09.204 12:54:28 -- accel/accel.sh@22 -- # case "$var" in 00:32:09.204 12:54:28 -- accel/accel.sh@20 -- # IFS=: 00:32:09.204 12:54:28 -- accel/accel.sh@20 -- # read -r var val 00:32:09.204 12:54:28 -- accel/accel.sh@21 -- # val='1 seconds' 00:32:09.204 12:54:28 -- accel/accel.sh@22 -- # case "$var" in 00:32:09.204 12:54:28 -- accel/accel.sh@20 -- # IFS=: 00:32:09.204 12:54:28 -- accel/accel.sh@20 -- # read -r var val 00:32:09.204 12:54:28 -- accel/accel.sh@21 -- # val=Yes 00:32:09.204 12:54:28 -- accel/accel.sh@22 -- # case "$var" in 00:32:09.204 12:54:28 -- accel/accel.sh@20 -- # IFS=: 00:32:09.204 12:54:28 -- accel/accel.sh@20 -- # read -r var val 00:32:09.204 12:54:28 -- accel/accel.sh@21 -- # val= 00:32:09.205 12:54:28 -- accel/accel.sh@22 -- # case "$var" in 00:32:09.205 12:54:28 -- accel/accel.sh@20 -- # IFS=: 00:32:09.205 12:54:28 -- accel/accel.sh@20 -- # read -r var val 00:32:09.205 12:54:28 -- accel/accel.sh@21 -- # val= 00:32:09.205 12:54:28 -- accel/accel.sh@22 -- # case "$var" in 00:32:09.205 12:54:28 -- accel/accel.sh@20 -- # IFS=: 00:32:09.205 12:54:28 -- accel/accel.sh@20 -- # read -r var val 00:32:10.141 12:54:29 -- accel/accel.sh@21 -- # val= 00:32:10.141 12:54:29 -- accel/accel.sh@22 -- # case "$var" in 00:32:10.141 12:54:29 -- accel/accel.sh@20 -- # IFS=: 00:32:10.141 12:54:29 -- accel/accel.sh@20 -- # read -r var val 00:32:10.141 12:54:29 -- accel/accel.sh@21 -- # val= 00:32:10.142 12:54:29 -- accel/accel.sh@22 -- # case "$var" in 00:32:10.142 12:54:29 -- accel/accel.sh@20 -- # IFS=: 00:32:10.142 12:54:29 -- accel/accel.sh@20 -- # read -r var val 00:32:10.142 12:54:29 -- accel/accel.sh@21 -- # val= 00:32:10.142 12:54:29 -- accel/accel.sh@22 -- # case "$var" in 00:32:10.142 12:54:29 -- accel/accel.sh@20 -- # IFS=: 00:32:10.142 12:54:29 -- accel/accel.sh@20 -- # read -r var val 00:32:10.142 12:54:29 -- accel/accel.sh@21 -- # val= 00:32:10.142 12:54:29 -- accel/accel.sh@22 -- # case "$var" in 00:32:10.142 12:54:29 -- accel/accel.sh@20 -- # IFS=: 00:32:10.142 12:54:29 -- accel/accel.sh@20 -- # read -r var val 00:32:10.142 12:54:29 -- accel/accel.sh@21 -- # val= 00:32:10.142 12:54:29 -- accel/accel.sh@22 -- # case "$var" in 00:32:10.142 12:54:29 -- accel/accel.sh@20 -- # IFS=: 00:32:10.142 12:54:29 -- 
accel/accel.sh@20 -- # read -r var val 00:32:10.142 12:54:29 -- accel/accel.sh@21 -- # val= 00:32:10.142 12:54:29 -- accel/accel.sh@22 -- # case "$var" in 00:32:10.142 12:54:29 -- accel/accel.sh@20 -- # IFS=: 00:32:10.142 12:54:29 -- accel/accel.sh@20 -- # read -r var val 00:32:10.142 12:54:29 -- accel/accel.sh@28 -- # [[ -n software ]] 00:32:10.142 12:54:29 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:32:10.142 12:54:29 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:10.142 00:32:10.142 real 0m2.930s 00:32:10.142 user 0m2.493s 00:32:10.142 sys 0m0.228s 00:32:10.142 12:54:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:10.142 ************************************ 00:32:10.142 END TEST accel_crc32c_C2 00:32:10.142 ************************************ 00:32:10.142 12:54:29 -- common/autotest_common.sh@10 -- # set +x 00:32:10.401 12:54:29 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:32:10.401 12:54:29 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:32:10.401 12:54:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:10.401 12:54:29 -- common/autotest_common.sh@10 -- # set +x 00:32:10.401 ************************************ 00:32:10.401 START TEST accel_copy 00:32:10.401 ************************************ 00:32:10.401 12:54:29 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy -y 00:32:10.401 12:54:29 -- accel/accel.sh@16 -- # local accel_opc 00:32:10.401 12:54:29 -- accel/accel.sh@17 -- # local accel_module 00:32:10.401 12:54:29 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:32:10.401 12:54:29 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:32:10.401 12:54:29 -- accel/accel.sh@12 -- # build_accel_config 00:32:10.401 12:54:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:32:10.401 12:54:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:32:10.401 12:54:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:32:10.401 12:54:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:32:10.401 12:54:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:32:10.401 12:54:29 -- accel/accel.sh@41 -- # local IFS=, 00:32:10.401 12:54:29 -- accel/accel.sh@42 -- # jq -r . 00:32:10.401 [2024-07-22 12:54:29.592513] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:32:10.401 [2024-07-22 12:54:29.592600] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70255 ] 00:32:10.401 [2024-07-22 12:54:29.723220] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:10.401 [2024-07-22 12:54:29.818297] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:11.780 12:54:31 -- accel/accel.sh@18 -- # out=' 00:32:11.780 SPDK Configuration: 00:32:11.780 Core mask: 0x1 00:32:11.780 00:32:11.780 Accel Perf Configuration: 00:32:11.780 Workload Type: copy 00:32:11.780 Transfer size: 4096 bytes 00:32:11.780 Vector count 1 00:32:11.780 Module: software 00:32:11.780 Queue depth: 32 00:32:11.780 Allocate depth: 32 00:32:11.780 # threads/core: 1 00:32:11.780 Run time: 1 seconds 00:32:11.780 Verify: Yes 00:32:11.780 00:32:11.780 Running for 1 seconds... 
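The accel_copy test started above keeps the default 4 KiB transfer size, which shows up as "Transfer size: 4096 bytes" in the configuration block that follows. Per the -o description in the option listing, the size could be overridden when running the binary by hand; the value below is purely hypothetical:

  # copy workload with an illustrative 64 KiB transfer size instead of the 4 KiB default
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w copy -y -o 65536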
00:32:11.780 00:32:11.780 Core,Thread Transfers Bandwidth Failed Miscompares 00:32:11.780 ------------------------------------------------------------------------------------ 00:32:11.780 0,0 308800/s 1206 MiB/s 0 0 00:32:11.780 ==================================================================================== 00:32:11.780 Total 308800/s 1206 MiB/s 0 0' 00:32:11.780 12:54:31 -- accel/accel.sh@20 -- # IFS=: 00:32:11.780 12:54:31 -- accel/accel.sh@20 -- # read -r var val 00:32:11.780 12:54:31 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:32:11.780 12:54:31 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:32:11.780 12:54:31 -- accel/accel.sh@12 -- # build_accel_config 00:32:11.780 12:54:31 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:32:11.780 12:54:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:32:11.780 12:54:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:32:11.780 12:54:31 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:32:11.780 12:54:31 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:32:11.780 12:54:31 -- accel/accel.sh@41 -- # local IFS=, 00:32:11.780 12:54:31 -- accel/accel.sh@42 -- # jq -r . 00:32:11.780 [2024-07-22 12:54:31.060951] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:32:11.780 [2024-07-22 12:54:31.061056] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70275 ] 00:32:11.780 [2024-07-22 12:54:31.197467] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:12.039 [2024-07-22 12:54:31.294734] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:12.039 12:54:31 -- accel/accel.sh@21 -- # val= 00:32:12.039 12:54:31 -- accel/accel.sh@22 -- # case "$var" in 00:32:12.039 12:54:31 -- accel/accel.sh@20 -- # IFS=: 00:32:12.039 12:54:31 -- accel/accel.sh@20 -- # read -r var val 00:32:12.039 12:54:31 -- accel/accel.sh@21 -- # val= 00:32:12.039 12:54:31 -- accel/accel.sh@22 -- # case "$var" in 00:32:12.039 12:54:31 -- accel/accel.sh@20 -- # IFS=: 00:32:12.039 12:54:31 -- accel/accel.sh@20 -- # read -r var val 00:32:12.039 12:54:31 -- accel/accel.sh@21 -- # val=0x1 00:32:12.039 12:54:31 -- accel/accel.sh@22 -- # case "$var" in 00:32:12.039 12:54:31 -- accel/accel.sh@20 -- # IFS=: 00:32:12.039 12:54:31 -- accel/accel.sh@20 -- # read -r var val 00:32:12.039 12:54:31 -- accel/accel.sh@21 -- # val= 00:32:12.039 12:54:31 -- accel/accel.sh@22 -- # case "$var" in 00:32:12.039 12:54:31 -- accel/accel.sh@20 -- # IFS=: 00:32:12.039 12:54:31 -- accel/accel.sh@20 -- # read -r var val 00:32:12.039 12:54:31 -- accel/accel.sh@21 -- # val= 00:32:12.039 12:54:31 -- accel/accel.sh@22 -- # case "$var" in 00:32:12.039 12:54:31 -- accel/accel.sh@20 -- # IFS=: 00:32:12.039 12:54:31 -- accel/accel.sh@20 -- # read -r var val 00:32:12.039 12:54:31 -- accel/accel.sh@21 -- # val=copy 00:32:12.039 12:54:31 -- accel/accel.sh@22 -- # case "$var" in 00:32:12.039 12:54:31 -- accel/accel.sh@24 -- # accel_opc=copy 00:32:12.039 12:54:31 -- accel/accel.sh@20 -- # IFS=: 00:32:12.039 12:54:31 -- accel/accel.sh@20 -- # read -r var val 00:32:12.039 12:54:31 -- accel/accel.sh@21 -- # val='4096 bytes' 00:32:12.039 12:54:31 -- accel/accel.sh@22 -- # case "$var" in 00:32:12.039 12:54:31 -- accel/accel.sh@20 -- # IFS=: 00:32:12.039 12:54:31 -- accel/accel.sh@20 -- # read -r var val 00:32:12.039 12:54:31 -- 
accel/accel.sh@21 -- # val= 00:32:12.039 12:54:31 -- accel/accel.sh@22 -- # case "$var" in 00:32:12.039 12:54:31 -- accel/accel.sh@20 -- # IFS=: 00:32:12.039 12:54:31 -- accel/accel.sh@20 -- # read -r var val 00:32:12.039 12:54:31 -- accel/accel.sh@21 -- # val=software 00:32:12.039 12:54:31 -- accel/accel.sh@22 -- # case "$var" in 00:32:12.039 12:54:31 -- accel/accel.sh@23 -- # accel_module=software 00:32:12.039 12:54:31 -- accel/accel.sh@20 -- # IFS=: 00:32:12.039 12:54:31 -- accel/accel.sh@20 -- # read -r var val 00:32:12.039 12:54:31 -- accel/accel.sh@21 -- # val=32 00:32:12.039 12:54:31 -- accel/accel.sh@22 -- # case "$var" in 00:32:12.039 12:54:31 -- accel/accel.sh@20 -- # IFS=: 00:32:12.039 12:54:31 -- accel/accel.sh@20 -- # read -r var val 00:32:12.039 12:54:31 -- accel/accel.sh@21 -- # val=32 00:32:12.039 12:54:31 -- accel/accel.sh@22 -- # case "$var" in 00:32:12.039 12:54:31 -- accel/accel.sh@20 -- # IFS=: 00:32:12.039 12:54:31 -- accel/accel.sh@20 -- # read -r var val 00:32:12.039 12:54:31 -- accel/accel.sh@21 -- # val=1 00:32:12.039 12:54:31 -- accel/accel.sh@22 -- # case "$var" in 00:32:12.039 12:54:31 -- accel/accel.sh@20 -- # IFS=: 00:32:12.039 12:54:31 -- accel/accel.sh@20 -- # read -r var val 00:32:12.039 12:54:31 -- accel/accel.sh@21 -- # val='1 seconds' 00:32:12.039 12:54:31 -- accel/accel.sh@22 -- # case "$var" in 00:32:12.039 12:54:31 -- accel/accel.sh@20 -- # IFS=: 00:32:12.040 12:54:31 -- accel/accel.sh@20 -- # read -r var val 00:32:12.040 12:54:31 -- accel/accel.sh@21 -- # val=Yes 00:32:12.040 12:54:31 -- accel/accel.sh@22 -- # case "$var" in 00:32:12.040 12:54:31 -- accel/accel.sh@20 -- # IFS=: 00:32:12.040 12:54:31 -- accel/accel.sh@20 -- # read -r var val 00:32:12.040 12:54:31 -- accel/accel.sh@21 -- # val= 00:32:12.040 12:54:31 -- accel/accel.sh@22 -- # case "$var" in 00:32:12.040 12:54:31 -- accel/accel.sh@20 -- # IFS=: 00:32:12.040 12:54:31 -- accel/accel.sh@20 -- # read -r var val 00:32:12.040 12:54:31 -- accel/accel.sh@21 -- # val= 00:32:12.040 12:54:31 -- accel/accel.sh@22 -- # case "$var" in 00:32:12.040 12:54:31 -- accel/accel.sh@20 -- # IFS=: 00:32:12.040 12:54:31 -- accel/accel.sh@20 -- # read -r var val 00:32:13.417 12:54:32 -- accel/accel.sh@21 -- # val= 00:32:13.417 12:54:32 -- accel/accel.sh@22 -- # case "$var" in 00:32:13.417 12:54:32 -- accel/accel.sh@20 -- # IFS=: 00:32:13.417 12:54:32 -- accel/accel.sh@20 -- # read -r var val 00:32:13.417 12:54:32 -- accel/accel.sh@21 -- # val= 00:32:13.417 12:54:32 -- accel/accel.sh@22 -- # case "$var" in 00:32:13.417 12:54:32 -- accel/accel.sh@20 -- # IFS=: 00:32:13.418 12:54:32 -- accel/accel.sh@20 -- # read -r var val 00:32:13.418 12:54:32 -- accel/accel.sh@21 -- # val= 00:32:13.418 12:54:32 -- accel/accel.sh@22 -- # case "$var" in 00:32:13.418 12:54:32 -- accel/accel.sh@20 -- # IFS=: 00:32:13.418 12:54:32 -- accel/accel.sh@20 -- # read -r var val 00:32:13.418 12:54:32 -- accel/accel.sh@21 -- # val= 00:32:13.418 12:54:32 -- accel/accel.sh@22 -- # case "$var" in 00:32:13.418 12:54:32 -- accel/accel.sh@20 -- # IFS=: 00:32:13.418 12:54:32 -- accel/accel.sh@20 -- # read -r var val 00:32:13.418 12:54:32 -- accel/accel.sh@21 -- # val= 00:32:13.418 12:54:32 -- accel/accel.sh@22 -- # case "$var" in 00:32:13.418 12:54:32 -- accel/accel.sh@20 -- # IFS=: 00:32:13.418 12:54:32 -- accel/accel.sh@20 -- # read -r var val 00:32:13.418 12:54:32 -- accel/accel.sh@21 -- # val= 00:32:13.418 12:54:32 -- accel/accel.sh@22 -- # case "$var" in 00:32:13.418 12:54:32 -- accel/accel.sh@20 -- # IFS=: 00:32:13.418 12:54:32 -- 
accel/accel.sh@20 -- # read -r var val 00:32:13.418 12:54:32 -- accel/accel.sh@28 -- # [[ -n software ]] 00:32:13.418 12:54:32 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:32:13.418 12:54:32 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:13.418 00:32:13.418 real 0m2.937s 00:32:13.418 user 0m2.513s 00:32:13.418 sys 0m0.218s 00:32:13.418 12:54:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:13.418 ************************************ 00:32:13.418 12:54:32 -- common/autotest_common.sh@10 -- # set +x 00:32:13.418 END TEST accel_copy 00:32:13.418 ************************************ 00:32:13.418 12:54:32 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:32:13.418 12:54:32 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:32:13.418 12:54:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:13.418 12:54:32 -- common/autotest_common.sh@10 -- # set +x 00:32:13.418 ************************************ 00:32:13.418 START TEST accel_fill 00:32:13.418 ************************************ 00:32:13.418 12:54:32 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:32:13.418 12:54:32 -- accel/accel.sh@16 -- # local accel_opc 00:32:13.418 12:54:32 -- accel/accel.sh@17 -- # local accel_module 00:32:13.418 12:54:32 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:32:13.418 12:54:32 -- accel/accel.sh@12 -- # build_accel_config 00:32:13.418 12:54:32 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:32:13.418 12:54:32 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:32:13.418 12:54:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:32:13.418 12:54:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:32:13.418 12:54:32 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:32:13.418 12:54:32 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:32:13.418 12:54:32 -- accel/accel.sh@41 -- # local IFS=, 00:32:13.418 12:54:32 -- accel/accel.sh@42 -- # jq -r . 00:32:13.418 [2024-07-22 12:54:32.580868] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:32:13.418 [2024-07-22 12:54:32.580969] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70309 ] 00:32:13.418 [2024-07-22 12:54:32.717175] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:13.418 [2024-07-22 12:54:32.811578] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:14.791 12:54:34 -- accel/accel.sh@18 -- # out=' 00:32:14.791 SPDK Configuration: 00:32:14.791 Core mask: 0x1 00:32:14.791 00:32:14.791 Accel Perf Configuration: 00:32:14.791 Workload Type: fill 00:32:14.791 Fill pattern: 0x80 00:32:14.791 Transfer size: 4096 bytes 00:32:14.791 Vector count 1 00:32:14.791 Module: software 00:32:14.791 Queue depth: 64 00:32:14.791 Allocate depth: 64 00:32:14.791 # threads/core: 1 00:32:14.791 Run time: 1 seconds 00:32:14.791 Verify: Yes 00:32:14.791 00:32:14.791 Running for 1 seconds... 
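In the accel_fill test starting above, the harness passes -f 128 -q 64 -a 64: 128 decimal is the 0x80 reported as "Fill pattern" in the configuration block above, while -q and -a appear as queue depth and allocate depth 64. Without the harness-supplied -c /dev/fd/62 config, the underlying command is:

  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y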
00:32:14.791 00:32:14.791 Core,Thread Transfers Bandwidth Failed Miscompares 00:32:14.791 ------------------------------------------------------------------------------------ 00:32:14.791 0,0 452608/s 1768 MiB/s 0 0 00:32:14.791 ==================================================================================== 00:32:14.791 Total 452608/s 1768 MiB/s 0 0' 00:32:14.791 12:54:34 -- accel/accel.sh@20 -- # IFS=: 00:32:14.791 12:54:34 -- accel/accel.sh@20 -- # read -r var val 00:32:14.791 12:54:34 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:32:14.791 12:54:34 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:32:14.791 12:54:34 -- accel/accel.sh@12 -- # build_accel_config 00:32:14.791 12:54:34 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:32:14.791 12:54:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:32:14.791 12:54:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:32:14.791 12:54:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:32:14.791 12:54:34 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:32:14.791 12:54:34 -- accel/accel.sh@41 -- # local IFS=, 00:32:14.791 12:54:34 -- accel/accel.sh@42 -- # jq -r . 00:32:14.791 [2024-07-22 12:54:34.050008] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:32:14.791 [2024-07-22 12:54:34.050158] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70329 ] 00:32:14.791 [2024-07-22 12:54:34.192258] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:15.050 [2024-07-22 12:54:34.286655] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:15.050 12:54:34 -- accel/accel.sh@21 -- # val= 00:32:15.050 12:54:34 -- accel/accel.sh@22 -- # case "$var" in 00:32:15.050 12:54:34 -- accel/accel.sh@20 -- # IFS=: 00:32:15.050 12:54:34 -- accel/accel.sh@20 -- # read -r var val 00:32:15.050 12:54:34 -- accel/accel.sh@21 -- # val= 00:32:15.050 12:54:34 -- accel/accel.sh@22 -- # case "$var" in 00:32:15.050 12:54:34 -- accel/accel.sh@20 -- # IFS=: 00:32:15.050 12:54:34 -- accel/accel.sh@20 -- # read -r var val 00:32:15.050 12:54:34 -- accel/accel.sh@21 -- # val=0x1 00:32:15.050 12:54:34 -- accel/accel.sh@22 -- # case "$var" in 00:32:15.050 12:54:34 -- accel/accel.sh@20 -- # IFS=: 00:32:15.050 12:54:34 -- accel/accel.sh@20 -- # read -r var val 00:32:15.050 12:54:34 -- accel/accel.sh@21 -- # val= 00:32:15.050 12:54:34 -- accel/accel.sh@22 -- # case "$var" in 00:32:15.050 12:54:34 -- accel/accel.sh@20 -- # IFS=: 00:32:15.050 12:54:34 -- accel/accel.sh@20 -- # read -r var val 00:32:15.050 12:54:34 -- accel/accel.sh@21 -- # val= 00:32:15.050 12:54:34 -- accel/accel.sh@22 -- # case "$var" in 00:32:15.050 12:54:34 -- accel/accel.sh@20 -- # IFS=: 00:32:15.050 12:54:34 -- accel/accel.sh@20 -- # read -r var val 00:32:15.050 12:54:34 -- accel/accel.sh@21 -- # val=fill 00:32:15.050 12:54:34 -- accel/accel.sh@22 -- # case "$var" in 00:32:15.050 12:54:34 -- accel/accel.sh@24 -- # accel_opc=fill 00:32:15.050 12:54:34 -- accel/accel.sh@20 -- # IFS=: 00:32:15.050 12:54:34 -- accel/accel.sh@20 -- # read -r var val 00:32:15.050 12:54:34 -- accel/accel.sh@21 -- # val=0x80 00:32:15.050 12:54:34 -- accel/accel.sh@22 -- # case "$var" in 00:32:15.050 12:54:34 -- accel/accel.sh@20 -- # IFS=: 00:32:15.050 12:54:34 -- accel/accel.sh@20 -- # read -r var val 
00:32:15.050 12:54:34 -- accel/accel.sh@21 -- # val='4096 bytes' 00:32:15.050 12:54:34 -- accel/accel.sh@22 -- # case "$var" in 00:32:15.050 12:54:34 -- accel/accel.sh@20 -- # IFS=: 00:32:15.050 12:54:34 -- accel/accel.sh@20 -- # read -r var val 00:32:15.050 12:54:34 -- accel/accel.sh@21 -- # val= 00:32:15.050 12:54:34 -- accel/accel.sh@22 -- # case "$var" in 00:32:15.050 12:54:34 -- accel/accel.sh@20 -- # IFS=: 00:32:15.050 12:54:34 -- accel/accel.sh@20 -- # read -r var val 00:32:15.050 12:54:34 -- accel/accel.sh@21 -- # val=software 00:32:15.050 12:54:34 -- accel/accel.sh@22 -- # case "$var" in 00:32:15.050 12:54:34 -- accel/accel.sh@23 -- # accel_module=software 00:32:15.050 12:54:34 -- accel/accel.sh@20 -- # IFS=: 00:32:15.050 12:54:34 -- accel/accel.sh@20 -- # read -r var val 00:32:15.050 12:54:34 -- accel/accel.sh@21 -- # val=64 00:32:15.050 12:54:34 -- accel/accel.sh@22 -- # case "$var" in 00:32:15.050 12:54:34 -- accel/accel.sh@20 -- # IFS=: 00:32:15.050 12:54:34 -- accel/accel.sh@20 -- # read -r var val 00:32:15.050 12:54:34 -- accel/accel.sh@21 -- # val=64 00:32:15.050 12:54:34 -- accel/accel.sh@22 -- # case "$var" in 00:32:15.050 12:54:34 -- accel/accel.sh@20 -- # IFS=: 00:32:15.050 12:54:34 -- accel/accel.sh@20 -- # read -r var val 00:32:15.050 12:54:34 -- accel/accel.sh@21 -- # val=1 00:32:15.050 12:54:34 -- accel/accel.sh@22 -- # case "$var" in 00:32:15.050 12:54:34 -- accel/accel.sh@20 -- # IFS=: 00:32:15.050 12:54:34 -- accel/accel.sh@20 -- # read -r var val 00:32:15.050 12:54:34 -- accel/accel.sh@21 -- # val='1 seconds' 00:32:15.050 12:54:34 -- accel/accel.sh@22 -- # case "$var" in 00:32:15.050 12:54:34 -- accel/accel.sh@20 -- # IFS=: 00:32:15.050 12:54:34 -- accel/accel.sh@20 -- # read -r var val 00:32:15.050 12:54:34 -- accel/accel.sh@21 -- # val=Yes 00:32:15.050 12:54:34 -- accel/accel.sh@22 -- # case "$var" in 00:32:15.050 12:54:34 -- accel/accel.sh@20 -- # IFS=: 00:32:15.050 12:54:34 -- accel/accel.sh@20 -- # read -r var val 00:32:15.050 12:54:34 -- accel/accel.sh@21 -- # val= 00:32:15.050 12:54:34 -- accel/accel.sh@22 -- # case "$var" in 00:32:15.050 12:54:34 -- accel/accel.sh@20 -- # IFS=: 00:32:15.050 12:54:34 -- accel/accel.sh@20 -- # read -r var val 00:32:15.050 12:54:34 -- accel/accel.sh@21 -- # val= 00:32:15.050 12:54:34 -- accel/accel.sh@22 -- # case "$var" in 00:32:15.050 12:54:34 -- accel/accel.sh@20 -- # IFS=: 00:32:15.050 12:54:34 -- accel/accel.sh@20 -- # read -r var val 00:32:16.424 12:54:35 -- accel/accel.sh@21 -- # val= 00:32:16.424 12:54:35 -- accel/accel.sh@22 -- # case "$var" in 00:32:16.424 12:54:35 -- accel/accel.sh@20 -- # IFS=: 00:32:16.424 12:54:35 -- accel/accel.sh@20 -- # read -r var val 00:32:16.424 12:54:35 -- accel/accel.sh@21 -- # val= 00:32:16.424 12:54:35 -- accel/accel.sh@22 -- # case "$var" in 00:32:16.424 12:54:35 -- accel/accel.sh@20 -- # IFS=: 00:32:16.424 12:54:35 -- accel/accel.sh@20 -- # read -r var val 00:32:16.425 12:54:35 -- accel/accel.sh@21 -- # val= 00:32:16.425 12:54:35 -- accel/accel.sh@22 -- # case "$var" in 00:32:16.425 12:54:35 -- accel/accel.sh@20 -- # IFS=: 00:32:16.425 12:54:35 -- accel/accel.sh@20 -- # read -r var val 00:32:16.425 12:54:35 -- accel/accel.sh@21 -- # val= 00:32:16.425 12:54:35 -- accel/accel.sh@22 -- # case "$var" in 00:32:16.425 12:54:35 -- accel/accel.sh@20 -- # IFS=: 00:32:16.425 12:54:35 -- accel/accel.sh@20 -- # read -r var val 00:32:16.425 12:54:35 -- accel/accel.sh@21 -- # val= 00:32:16.425 12:54:35 -- accel/accel.sh@22 -- # case "$var" in 00:32:16.425 12:54:35 -- accel/accel.sh@20 -- # IFS=: 
00:32:16.425 12:54:35 -- accel/accel.sh@20 -- # read -r var val 00:32:16.425 12:54:35 -- accel/accel.sh@21 -- # val= 00:32:16.425 12:54:35 -- accel/accel.sh@22 -- # case "$var" in 00:32:16.425 12:54:35 -- accel/accel.sh@20 -- # IFS=: 00:32:16.425 12:54:35 -- accel/accel.sh@20 -- # read -r var val 00:32:16.425 12:54:35 -- accel/accel.sh@28 -- # [[ -n software ]] 00:32:16.425 12:54:35 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:32:16.425 12:54:35 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:16.425 00:32:16.425 real 0m2.944s 00:32:16.425 user 0m2.491s 00:32:16.425 sys 0m0.245s 00:32:16.425 12:54:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:16.425 12:54:35 -- common/autotest_common.sh@10 -- # set +x 00:32:16.425 ************************************ 00:32:16.425 END TEST accel_fill 00:32:16.425 ************************************ 00:32:16.425 12:54:35 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:32:16.425 12:54:35 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:32:16.425 12:54:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:16.425 12:54:35 -- common/autotest_common.sh@10 -- # set +x 00:32:16.425 ************************************ 00:32:16.425 START TEST accel_copy_crc32c 00:32:16.425 ************************************ 00:32:16.425 12:54:35 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y 00:32:16.425 12:54:35 -- accel/accel.sh@16 -- # local accel_opc 00:32:16.425 12:54:35 -- accel/accel.sh@17 -- # local accel_module 00:32:16.425 12:54:35 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:32:16.425 12:54:35 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:32:16.425 12:54:35 -- accel/accel.sh@12 -- # build_accel_config 00:32:16.425 12:54:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:32:16.425 12:54:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:32:16.425 12:54:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:32:16.425 12:54:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:32:16.425 12:54:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:32:16.425 12:54:35 -- accel/accel.sh@41 -- # local IFS=, 00:32:16.425 12:54:35 -- accel/accel.sh@42 -- # jq -r . 00:32:16.425 [2024-07-22 12:54:35.567744] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:32:16.425 [2024-07-22 12:54:35.567835] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70363 ] 00:32:16.425 [2024-07-22 12:54:35.698631] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:16.425 [2024-07-22 12:54:35.792724] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:17.798 12:54:37 -- accel/accel.sh@18 -- # out=' 00:32:17.798 SPDK Configuration: 00:32:17.798 Core mask: 0x1 00:32:17.798 00:32:17.798 Accel Perf Configuration: 00:32:17.798 Workload Type: copy_crc32c 00:32:17.798 CRC-32C seed: 0 00:32:17.798 Vector size: 4096 bytes 00:32:17.798 Transfer size: 4096 bytes 00:32:17.798 Vector count 1 00:32:17.798 Module: software 00:32:17.798 Queue depth: 32 00:32:17.798 Allocate depth: 32 00:32:17.798 # threads/core: 1 00:32:17.798 Run time: 1 seconds 00:32:17.798 Verify: Yes 00:32:17.798 00:32:17.798 Running for 1 seconds... 
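The accel_copy_crc32c test above combines a copy with a CRC-32C calculation in a single operation; since no -S is given, the configuration block above shows the default seed of 0, consistent with the option listing earlier. Stripped of the harness-supplied -c /dev/fd/62 config, the run reduces to:

  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w copy_crc32c -y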
00:32:17.798 00:32:17.798 Core,Thread Transfers Bandwidth Failed Miscompares 00:32:17.798 ------------------------------------------------------------------------------------ 00:32:17.798 0,0 245952/s 960 MiB/s 0 0 00:32:17.798 ==================================================================================== 00:32:17.798 Total 245952/s 960 MiB/s 0 0' 00:32:17.798 12:54:37 -- accel/accel.sh@20 -- # IFS=: 00:32:17.798 12:54:37 -- accel/accel.sh@20 -- # read -r var val 00:32:17.798 12:54:37 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:32:17.798 12:54:37 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:32:17.798 12:54:37 -- accel/accel.sh@12 -- # build_accel_config 00:32:17.798 12:54:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:32:17.798 12:54:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:32:17.798 12:54:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:32:17.798 12:54:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:32:17.798 12:54:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:32:17.798 12:54:37 -- accel/accel.sh@41 -- # local IFS=, 00:32:17.798 12:54:37 -- accel/accel.sh@42 -- # jq -r . 00:32:17.798 [2024-07-22 12:54:37.031707] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:32:17.798 [2024-07-22 12:54:37.031807] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70385 ] 00:32:17.798 [2024-07-22 12:54:37.167668] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:18.057 [2024-07-22 12:54:37.263583] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:18.057 12:54:37 -- accel/accel.sh@21 -- # val= 00:32:18.057 12:54:37 -- accel/accel.sh@22 -- # case "$var" in 00:32:18.057 12:54:37 -- accel/accel.sh@20 -- # IFS=: 00:32:18.057 12:54:37 -- accel/accel.sh@20 -- # read -r var val 00:32:18.057 12:54:37 -- accel/accel.sh@21 -- # val= 00:32:18.057 12:54:37 -- accel/accel.sh@22 -- # case "$var" in 00:32:18.057 12:54:37 -- accel/accel.sh@20 -- # IFS=: 00:32:18.057 12:54:37 -- accel/accel.sh@20 -- # read -r var val 00:32:18.057 12:54:37 -- accel/accel.sh@21 -- # val=0x1 00:32:18.057 12:54:37 -- accel/accel.sh@22 -- # case "$var" in 00:32:18.057 12:54:37 -- accel/accel.sh@20 -- # IFS=: 00:32:18.057 12:54:37 -- accel/accel.sh@20 -- # read -r var val 00:32:18.057 12:54:37 -- accel/accel.sh@21 -- # val= 00:32:18.057 12:54:37 -- accel/accel.sh@22 -- # case "$var" in 00:32:18.057 12:54:37 -- accel/accel.sh@20 -- # IFS=: 00:32:18.057 12:54:37 -- accel/accel.sh@20 -- # read -r var val 00:32:18.057 12:54:37 -- accel/accel.sh@21 -- # val= 00:32:18.057 12:54:37 -- accel/accel.sh@22 -- # case "$var" in 00:32:18.057 12:54:37 -- accel/accel.sh@20 -- # IFS=: 00:32:18.057 12:54:37 -- accel/accel.sh@20 -- # read -r var val 00:32:18.057 12:54:37 -- accel/accel.sh@21 -- # val=copy_crc32c 00:32:18.057 12:54:37 -- accel/accel.sh@22 -- # case "$var" in 00:32:18.057 12:54:37 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:32:18.057 12:54:37 -- accel/accel.sh@20 -- # IFS=: 00:32:18.057 12:54:37 -- accel/accel.sh@20 -- # read -r var val 00:32:18.057 12:54:37 -- accel/accel.sh@21 -- # val=0 00:32:18.057 12:54:37 -- accel/accel.sh@22 -- # case "$var" in 00:32:18.057 12:54:37 -- accel/accel.sh@20 -- # IFS=: 00:32:18.057 12:54:37 -- accel/accel.sh@20 -- # read -r var val 00:32:18.057 
12:54:37 -- accel/accel.sh@21 -- # val='4096 bytes' 00:32:18.057 12:54:37 -- accel/accel.sh@22 -- # case "$var" in 00:32:18.057 12:54:37 -- accel/accel.sh@20 -- # IFS=: 00:32:18.057 12:54:37 -- accel/accel.sh@20 -- # read -r var val 00:32:18.057 12:54:37 -- accel/accel.sh@21 -- # val='4096 bytes' 00:32:18.057 12:54:37 -- accel/accel.sh@22 -- # case "$var" in 00:32:18.057 12:54:37 -- accel/accel.sh@20 -- # IFS=: 00:32:18.057 12:54:37 -- accel/accel.sh@20 -- # read -r var val 00:32:18.057 12:54:37 -- accel/accel.sh@21 -- # val= 00:32:18.057 12:54:37 -- accel/accel.sh@22 -- # case "$var" in 00:32:18.057 12:54:37 -- accel/accel.sh@20 -- # IFS=: 00:32:18.057 12:54:37 -- accel/accel.sh@20 -- # read -r var val 00:32:18.057 12:54:37 -- accel/accel.sh@21 -- # val=software 00:32:18.057 12:54:37 -- accel/accel.sh@22 -- # case "$var" in 00:32:18.057 12:54:37 -- accel/accel.sh@23 -- # accel_module=software 00:32:18.057 12:54:37 -- accel/accel.sh@20 -- # IFS=: 00:32:18.057 12:54:37 -- accel/accel.sh@20 -- # read -r var val 00:32:18.057 12:54:37 -- accel/accel.sh@21 -- # val=32 00:32:18.057 12:54:37 -- accel/accel.sh@22 -- # case "$var" in 00:32:18.057 12:54:37 -- accel/accel.sh@20 -- # IFS=: 00:32:18.057 12:54:37 -- accel/accel.sh@20 -- # read -r var val 00:32:18.057 12:54:37 -- accel/accel.sh@21 -- # val=32 00:32:18.057 12:54:37 -- accel/accel.sh@22 -- # case "$var" in 00:32:18.057 12:54:37 -- accel/accel.sh@20 -- # IFS=: 00:32:18.057 12:54:37 -- accel/accel.sh@20 -- # read -r var val 00:32:18.057 12:54:37 -- accel/accel.sh@21 -- # val=1 00:32:18.057 12:54:37 -- accel/accel.sh@22 -- # case "$var" in 00:32:18.057 12:54:37 -- accel/accel.sh@20 -- # IFS=: 00:32:18.057 12:54:37 -- accel/accel.sh@20 -- # read -r var val 00:32:18.057 12:54:37 -- accel/accel.sh@21 -- # val='1 seconds' 00:32:18.057 12:54:37 -- accel/accel.sh@22 -- # case "$var" in 00:32:18.057 12:54:37 -- accel/accel.sh@20 -- # IFS=: 00:32:18.057 12:54:37 -- accel/accel.sh@20 -- # read -r var val 00:32:18.057 12:54:37 -- accel/accel.sh@21 -- # val=Yes 00:32:18.057 12:54:37 -- accel/accel.sh@22 -- # case "$var" in 00:32:18.057 12:54:37 -- accel/accel.sh@20 -- # IFS=: 00:32:18.057 12:54:37 -- accel/accel.sh@20 -- # read -r var val 00:32:18.057 12:54:37 -- accel/accel.sh@21 -- # val= 00:32:18.057 12:54:37 -- accel/accel.sh@22 -- # case "$var" in 00:32:18.057 12:54:37 -- accel/accel.sh@20 -- # IFS=: 00:32:18.057 12:54:37 -- accel/accel.sh@20 -- # read -r var val 00:32:18.057 12:54:37 -- accel/accel.sh@21 -- # val= 00:32:18.057 12:54:37 -- accel/accel.sh@22 -- # case "$var" in 00:32:18.057 12:54:37 -- accel/accel.sh@20 -- # IFS=: 00:32:18.057 12:54:37 -- accel/accel.sh@20 -- # read -r var val 00:32:19.434 12:54:38 -- accel/accel.sh@21 -- # val= 00:32:19.434 12:54:38 -- accel/accel.sh@22 -- # case "$var" in 00:32:19.434 12:54:38 -- accel/accel.sh@20 -- # IFS=: 00:32:19.434 12:54:38 -- accel/accel.sh@20 -- # read -r var val 00:32:19.434 12:54:38 -- accel/accel.sh@21 -- # val= 00:32:19.434 12:54:38 -- accel/accel.sh@22 -- # case "$var" in 00:32:19.434 12:54:38 -- accel/accel.sh@20 -- # IFS=: 00:32:19.434 12:54:38 -- accel/accel.sh@20 -- # read -r var val 00:32:19.434 12:54:38 -- accel/accel.sh@21 -- # val= 00:32:19.434 12:54:38 -- accel/accel.sh@22 -- # case "$var" in 00:32:19.434 12:54:38 -- accel/accel.sh@20 -- # IFS=: 00:32:19.434 12:54:38 -- accel/accel.sh@20 -- # read -r var val 00:32:19.434 12:54:38 -- accel/accel.sh@21 -- # val= 00:32:19.434 12:54:38 -- accel/accel.sh@22 -- # case "$var" in 00:32:19.434 12:54:38 -- accel/accel.sh@20 -- # IFS=: 
00:32:19.434 12:54:38 -- accel/accel.sh@20 -- # read -r var val 00:32:19.434 12:54:38 -- accel/accel.sh@21 -- # val= 00:32:19.434 12:54:38 -- accel/accel.sh@22 -- # case "$var" in 00:32:19.434 12:54:38 -- accel/accel.sh@20 -- # IFS=: 00:32:19.434 12:54:38 -- accel/accel.sh@20 -- # read -r var val 00:32:19.434 12:54:38 -- accel/accel.sh@21 -- # val= 00:32:19.434 12:54:38 -- accel/accel.sh@22 -- # case "$var" in 00:32:19.434 12:54:38 -- accel/accel.sh@20 -- # IFS=: 00:32:19.434 12:54:38 -- accel/accel.sh@20 -- # read -r var val 00:32:19.434 12:54:38 -- accel/accel.sh@28 -- # [[ -n software ]] 00:32:19.434 12:54:38 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:32:19.434 12:54:38 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:19.434 00:32:19.434 real 0m2.938s 00:32:19.434 user 0m2.502s 00:32:19.434 sys 0m0.230s 00:32:19.434 12:54:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:19.434 ************************************ 00:32:19.434 12:54:38 -- common/autotest_common.sh@10 -- # set +x 00:32:19.434 END TEST accel_copy_crc32c 00:32:19.434 ************************************ 00:32:19.434 12:54:38 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:32:19.434 12:54:38 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:32:19.434 12:54:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:19.434 12:54:38 -- common/autotest_common.sh@10 -- # set +x 00:32:19.434 ************************************ 00:32:19.434 START TEST accel_copy_crc32c_C2 00:32:19.434 ************************************ 00:32:19.434 12:54:38 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:32:19.434 12:54:38 -- accel/accel.sh@16 -- # local accel_opc 00:32:19.434 12:54:38 -- accel/accel.sh@17 -- # local accel_module 00:32:19.434 12:54:38 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:32:19.434 12:54:38 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:32:19.434 12:54:38 -- accel/accel.sh@12 -- # build_accel_config 00:32:19.434 12:54:38 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:32:19.434 12:54:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:32:19.434 12:54:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:32:19.434 12:54:38 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:32:19.434 12:54:38 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:32:19.434 12:54:38 -- accel/accel.sh@41 -- # local IFS=, 00:32:19.434 12:54:38 -- accel/accel.sh@42 -- # jq -r . 00:32:19.434 [2024-07-22 12:54:38.567413] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:32:19.434 [2024-07-22 12:54:38.567541] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70418 ] 00:32:19.435 [2024-07-22 12:54:38.716307] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:19.435 [2024-07-22 12:54:38.814086] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:20.810 12:54:40 -- accel/accel.sh@18 -- # out=' 00:32:20.810 SPDK Configuration: 00:32:20.810 Core mask: 0x1 00:32:20.810 00:32:20.810 Accel Perf Configuration: 00:32:20.810 Workload Type: copy_crc32c 00:32:20.810 CRC-32C seed: 0 00:32:20.810 Vector size: 4096 bytes 00:32:20.810 Transfer size: 8192 bytes 00:32:20.810 Vector count 2 00:32:20.810 Module: software 00:32:20.810 Queue depth: 32 00:32:20.810 Allocate depth: 32 00:32:20.810 # threads/core: 1 00:32:20.810 Run time: 1 seconds 00:32:20.810 Verify: Yes 00:32:20.810 00:32:20.810 Running for 1 seconds... 00:32:20.810 00:32:20.810 Core,Thread Transfers Bandwidth Failed Miscompares 00:32:20.810 ------------------------------------------------------------------------------------ 00:32:20.810 0,0 175648/s 1372 MiB/s 0 0 00:32:20.810 ==================================================================================== 00:32:20.810 Total 175648/s 686 MiB/s 0 0' 00:32:20.810 12:54:40 -- accel/accel.sh@20 -- # IFS=: 00:32:20.810 12:54:40 -- accel/accel.sh@20 -- # read -r var val 00:32:20.810 12:54:40 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:32:20.810 12:54:40 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:32:20.810 12:54:40 -- accel/accel.sh@12 -- # build_accel_config 00:32:20.810 12:54:40 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:32:20.810 12:54:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:32:20.810 12:54:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:32:20.810 12:54:40 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:32:20.810 12:54:40 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:32:20.810 12:54:40 -- accel/accel.sh@41 -- # local IFS=, 00:32:20.810 12:54:40 -- accel/accel.sh@42 -- # jq -r . 00:32:20.810 [2024-07-22 12:54:40.056485] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:32:20.810 [2024-07-22 12:54:40.056593] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70439 ] 00:32:20.810 [2024-07-22 12:54:40.192093] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:21.069 [2024-07-22 12:54:40.285988] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:21.069 12:54:40 -- accel/accel.sh@21 -- # val= 00:32:21.069 12:54:40 -- accel/accel.sh@22 -- # case "$var" in 00:32:21.069 12:54:40 -- accel/accel.sh@20 -- # IFS=: 00:32:21.069 12:54:40 -- accel/accel.sh@20 -- # read -r var val 00:32:21.069 12:54:40 -- accel/accel.sh@21 -- # val= 00:32:21.069 12:54:40 -- accel/accel.sh@22 -- # case "$var" in 00:32:21.069 12:54:40 -- accel/accel.sh@20 -- # IFS=: 00:32:21.069 12:54:40 -- accel/accel.sh@20 -- # read -r var val 00:32:21.069 12:54:40 -- accel/accel.sh@21 -- # val=0x1 00:32:21.069 12:54:40 -- accel/accel.sh@22 -- # case "$var" in 00:32:21.069 12:54:40 -- accel/accel.sh@20 -- # IFS=: 00:32:21.069 12:54:40 -- accel/accel.sh@20 -- # read -r var val 00:32:21.069 12:54:40 -- accel/accel.sh@21 -- # val= 00:32:21.069 12:54:40 -- accel/accel.sh@22 -- # case "$var" in 00:32:21.069 12:54:40 -- accel/accel.sh@20 -- # IFS=: 00:32:21.069 12:54:40 -- accel/accel.sh@20 -- # read -r var val 00:32:21.069 12:54:40 -- accel/accel.sh@21 -- # val= 00:32:21.069 12:54:40 -- accel/accel.sh@22 -- # case "$var" in 00:32:21.070 12:54:40 -- accel/accel.sh@20 -- # IFS=: 00:32:21.070 12:54:40 -- accel/accel.sh@20 -- # read -r var val 00:32:21.070 12:54:40 -- accel/accel.sh@21 -- # val=copy_crc32c 00:32:21.070 12:54:40 -- accel/accel.sh@22 -- # case "$var" in 00:32:21.070 12:54:40 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:32:21.070 12:54:40 -- accel/accel.sh@20 -- # IFS=: 00:32:21.070 12:54:40 -- accel/accel.sh@20 -- # read -r var val 00:32:21.070 12:54:40 -- accel/accel.sh@21 -- # val=0 00:32:21.070 12:54:40 -- accel/accel.sh@22 -- # case "$var" in 00:32:21.070 12:54:40 -- accel/accel.sh@20 -- # IFS=: 00:32:21.070 12:54:40 -- accel/accel.sh@20 -- # read -r var val 00:32:21.070 12:54:40 -- accel/accel.sh@21 -- # val='4096 bytes' 00:32:21.070 12:54:40 -- accel/accel.sh@22 -- # case "$var" in 00:32:21.070 12:54:40 -- accel/accel.sh@20 -- # IFS=: 00:32:21.070 12:54:40 -- accel/accel.sh@20 -- # read -r var val 00:32:21.070 12:54:40 -- accel/accel.sh@21 -- # val='8192 bytes' 00:32:21.070 12:54:40 -- accel/accel.sh@22 -- # case "$var" in 00:32:21.070 12:54:40 -- accel/accel.sh@20 -- # IFS=: 00:32:21.070 12:54:40 -- accel/accel.sh@20 -- # read -r var val 00:32:21.070 12:54:40 -- accel/accel.sh@21 -- # val= 00:32:21.070 12:54:40 -- accel/accel.sh@22 -- # case "$var" in 00:32:21.070 12:54:40 -- accel/accel.sh@20 -- # IFS=: 00:32:21.070 12:54:40 -- accel/accel.sh@20 -- # read -r var val 00:32:21.070 12:54:40 -- accel/accel.sh@21 -- # val=software 00:32:21.070 12:54:40 -- accel/accel.sh@22 -- # case "$var" in 00:32:21.070 12:54:40 -- accel/accel.sh@23 -- # accel_module=software 00:32:21.070 12:54:40 -- accel/accel.sh@20 -- # IFS=: 00:32:21.070 12:54:40 -- accel/accel.sh@20 -- # read -r var val 00:32:21.070 12:54:40 -- accel/accel.sh@21 -- # val=32 00:32:21.070 12:54:40 -- accel/accel.sh@22 -- # case "$var" in 00:32:21.070 12:54:40 -- accel/accel.sh@20 -- # IFS=: 00:32:21.070 12:54:40 -- accel/accel.sh@20 -- # read -r var val 00:32:21.070 12:54:40 -- accel/accel.sh@21 -- # val=32 
00:32:21.070 12:54:40 -- accel/accel.sh@22 -- # case "$var" in 00:32:21.070 12:54:40 -- accel/accel.sh@20 -- # IFS=: 00:32:21.070 12:54:40 -- accel/accel.sh@20 -- # read -r var val 00:32:21.070 12:54:40 -- accel/accel.sh@21 -- # val=1 00:32:21.070 12:54:40 -- accel/accel.sh@22 -- # case "$var" in 00:32:21.070 12:54:40 -- accel/accel.sh@20 -- # IFS=: 00:32:21.070 12:54:40 -- accel/accel.sh@20 -- # read -r var val 00:32:21.070 12:54:40 -- accel/accel.sh@21 -- # val='1 seconds' 00:32:21.070 12:54:40 -- accel/accel.sh@22 -- # case "$var" in 00:32:21.070 12:54:40 -- accel/accel.sh@20 -- # IFS=: 00:32:21.070 12:54:40 -- accel/accel.sh@20 -- # read -r var val 00:32:21.070 12:54:40 -- accel/accel.sh@21 -- # val=Yes 00:32:21.070 12:54:40 -- accel/accel.sh@22 -- # case "$var" in 00:32:21.070 12:54:40 -- accel/accel.sh@20 -- # IFS=: 00:32:21.070 12:54:40 -- accel/accel.sh@20 -- # read -r var val 00:32:21.070 12:54:40 -- accel/accel.sh@21 -- # val= 00:32:21.070 12:54:40 -- accel/accel.sh@22 -- # case "$var" in 00:32:21.070 12:54:40 -- accel/accel.sh@20 -- # IFS=: 00:32:21.070 12:54:40 -- accel/accel.sh@20 -- # read -r var val 00:32:21.070 12:54:40 -- accel/accel.sh@21 -- # val= 00:32:21.070 12:54:40 -- accel/accel.sh@22 -- # case "$var" in 00:32:21.070 12:54:40 -- accel/accel.sh@20 -- # IFS=: 00:32:21.070 12:54:40 -- accel/accel.sh@20 -- # read -r var val 00:32:22.456 12:54:41 -- accel/accel.sh@21 -- # val= 00:32:22.456 12:54:41 -- accel/accel.sh@22 -- # case "$var" in 00:32:22.456 12:54:41 -- accel/accel.sh@20 -- # IFS=: 00:32:22.456 12:54:41 -- accel/accel.sh@20 -- # read -r var val 00:32:22.456 12:54:41 -- accel/accel.sh@21 -- # val= 00:32:22.456 12:54:41 -- accel/accel.sh@22 -- # case "$var" in 00:32:22.456 12:54:41 -- accel/accel.sh@20 -- # IFS=: 00:32:22.456 12:54:41 -- accel/accel.sh@20 -- # read -r var val 00:32:22.456 12:54:41 -- accel/accel.sh@21 -- # val= 00:32:22.456 12:54:41 -- accel/accel.sh@22 -- # case "$var" in 00:32:22.456 12:54:41 -- accel/accel.sh@20 -- # IFS=: 00:32:22.456 12:54:41 -- accel/accel.sh@20 -- # read -r var val 00:32:22.456 12:54:41 -- accel/accel.sh@21 -- # val= 00:32:22.456 12:54:41 -- accel/accel.sh@22 -- # case "$var" in 00:32:22.456 12:54:41 -- accel/accel.sh@20 -- # IFS=: 00:32:22.456 12:54:41 -- accel/accel.sh@20 -- # read -r var val 00:32:22.456 12:54:41 -- accel/accel.sh@21 -- # val= 00:32:22.456 12:54:41 -- accel/accel.sh@22 -- # case "$var" in 00:32:22.456 12:54:41 -- accel/accel.sh@20 -- # IFS=: 00:32:22.456 12:54:41 -- accel/accel.sh@20 -- # read -r var val 00:32:22.456 12:54:41 -- accel/accel.sh@21 -- # val= 00:32:22.456 12:54:41 -- accel/accel.sh@22 -- # case "$var" in 00:32:22.456 12:54:41 -- accel/accel.sh@20 -- # IFS=: 00:32:22.456 12:54:41 -- accel/accel.sh@20 -- # read -r var val 00:32:22.456 12:54:41 -- accel/accel.sh@28 -- # [[ -n software ]] 00:32:22.456 12:54:41 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:32:22.456 12:54:41 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:22.456 00:32:22.456 real 0m2.966s 00:32:22.456 user 0m2.521s 00:32:22.456 sys 0m0.237s 00:32:22.456 12:54:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:22.456 ************************************ 00:32:22.456 12:54:41 -- common/autotest_common.sh@10 -- # set +x 00:32:22.456 END TEST accel_copy_crc32c_C2 00:32:22.456 ************************************ 00:32:22.456 12:54:41 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:32:22.456 12:54:41 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 
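That closes the copy_crc32c case with a two-vector transfer (2 x 4096-byte vectors, 8192-byte transfer): accel.sh ran /home/vagrant/spdk_repo/spdk/build/examples/accel_perf with '-t 1 -w copy_crc32c -y -C 2' twice, each time feeding a JSON config over /dev/fd/62, and the software module was selected both times. On the bandwidth figures, 175648 transfers/s x 8192 bytes / 2^20 is roughly 1372 MiB/s, matching the per-core row, while the Total row's 686 MiB/s corresponds to the 4096-byte vector size; the factor-of-two gap looks like an accounting quirk of accel_perf rather than a failure (Failed and Miscompares are both 0), though that reading is an inference. A rough by-hand rerun, assuming accel_perf can be launched without the '-c /dev/fd/62' config the harness pipes in:
# Hedged sketch: approximate manual rerun of the copy_crc32c case above.
# Binary path and flags are copied from the accel.sh trace; dropping the
# '-c /dev/fd/62' JSON config is an assumption.
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w copy_crc32c -y -C 2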
00:32:22.456 12:54:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:22.456 12:54:41 -- common/autotest_common.sh@10 -- # set +x 00:32:22.456 ************************************ 00:32:22.456 START TEST accel_dualcast 00:32:22.456 ************************************ 00:32:22.456 12:54:41 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dualcast -y 00:32:22.456 12:54:41 -- accel/accel.sh@16 -- # local accel_opc 00:32:22.456 12:54:41 -- accel/accel.sh@17 -- # local accel_module 00:32:22.456 12:54:41 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:32:22.456 12:54:41 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:32:22.456 12:54:41 -- accel/accel.sh@12 -- # build_accel_config 00:32:22.456 12:54:41 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:32:22.456 12:54:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:32:22.456 12:54:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:32:22.456 12:54:41 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:32:22.456 12:54:41 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:32:22.456 12:54:41 -- accel/accel.sh@41 -- # local IFS=, 00:32:22.456 12:54:41 -- accel/accel.sh@42 -- # jq -r . 00:32:22.456 [2024-07-22 12:54:41.570837] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:32:22.456 [2024-07-22 12:54:41.570937] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70468 ] 00:32:22.456 [2024-07-22 12:54:41.705998] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:22.456 [2024-07-22 12:54:41.802487] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:23.892 12:54:43 -- accel/accel.sh@18 -- # out=' 00:32:23.892 SPDK Configuration: 00:32:23.892 Core mask: 0x1 00:32:23.892 00:32:23.892 Accel Perf Configuration: 00:32:23.892 Workload Type: dualcast 00:32:23.892 Transfer size: 4096 bytes 00:32:23.892 Vector count 1 00:32:23.892 Module: software 00:32:23.892 Queue depth: 32 00:32:23.892 Allocate depth: 32 00:32:23.892 # threads/core: 1 00:32:23.892 Run time: 1 seconds 00:32:23.892 Verify: Yes 00:32:23.892 00:32:23.892 Running for 1 seconds... 00:32:23.892 00:32:23.892 Core,Thread Transfers Bandwidth Failed Miscompares 00:32:23.892 ------------------------------------------------------------------------------------ 00:32:23.892 0,0 342400/s 1337 MiB/s 0 0 00:32:23.892 ==================================================================================== 00:32:23.892 Total 342400/s 1337 MiB/s 0 0' 00:32:23.892 12:54:43 -- accel/accel.sh@20 -- # IFS=: 00:32:23.892 12:54:43 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:32:23.892 12:54:43 -- accel/accel.sh@20 -- # read -r var val 00:32:23.892 12:54:43 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:32:23.892 12:54:43 -- accel/accel.sh@12 -- # build_accel_config 00:32:23.892 12:54:43 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:32:23.892 12:54:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:32:23.892 12:54:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:32:23.893 12:54:43 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:32:23.893 12:54:43 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:32:23.893 12:54:43 -- accel/accel.sh@41 -- # local IFS=, 00:32:23.893 12:54:43 -- accel/accel.sh@42 -- # jq -r . 
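The block above is the first dualcast pass: 342400 transfers/s at the 4096-byte transfer size works out to 342400 x 4096 / 2^20, roughly 1337 MiB/s, matching both the per-core and Total rows. A minimal by-hand equivalent, with the path and flags taken from the trace and the harness-supplied '-c /dev/fd/62' JSON config left out as an assumption:
# Hedged sketch: manual dualcast run mirroring the accel.sh invocation above.
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dualcast -y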
00:32:23.893 [2024-07-22 12:54:43.032630] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:32:23.893 [2024-07-22 12:54:43.032707] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70494 ] 00:32:23.893 [2024-07-22 12:54:43.162826] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:23.893 [2024-07-22 12:54:43.255712] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:23.893 12:54:43 -- accel/accel.sh@21 -- # val= 00:32:23.893 12:54:43 -- accel/accel.sh@22 -- # case "$var" in 00:32:23.893 12:54:43 -- accel/accel.sh@20 -- # IFS=: 00:32:23.893 12:54:43 -- accel/accel.sh@20 -- # read -r var val 00:32:23.893 12:54:43 -- accel/accel.sh@21 -- # val= 00:32:23.893 12:54:43 -- accel/accel.sh@22 -- # case "$var" in 00:32:23.893 12:54:43 -- accel/accel.sh@20 -- # IFS=: 00:32:23.893 12:54:43 -- accel/accel.sh@20 -- # read -r var val 00:32:23.893 12:54:43 -- accel/accel.sh@21 -- # val=0x1 00:32:23.893 12:54:43 -- accel/accel.sh@22 -- # case "$var" in 00:32:23.893 12:54:43 -- accel/accel.sh@20 -- # IFS=: 00:32:23.893 12:54:43 -- accel/accel.sh@20 -- # read -r var val 00:32:23.893 12:54:43 -- accel/accel.sh@21 -- # val= 00:32:23.893 12:54:43 -- accel/accel.sh@22 -- # case "$var" in 00:32:23.893 12:54:43 -- accel/accel.sh@20 -- # IFS=: 00:32:23.893 12:54:43 -- accel/accel.sh@20 -- # read -r var val 00:32:23.893 12:54:43 -- accel/accel.sh@21 -- # val= 00:32:23.893 12:54:43 -- accel/accel.sh@22 -- # case "$var" in 00:32:23.893 12:54:43 -- accel/accel.sh@20 -- # IFS=: 00:32:23.893 12:54:43 -- accel/accel.sh@20 -- # read -r var val 00:32:23.893 12:54:43 -- accel/accel.sh@21 -- # val=dualcast 00:32:23.893 12:54:43 -- accel/accel.sh@22 -- # case "$var" in 00:32:23.893 12:54:43 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:32:23.893 12:54:43 -- accel/accel.sh@20 -- # IFS=: 00:32:23.893 12:54:43 -- accel/accel.sh@20 -- # read -r var val 00:32:23.893 12:54:43 -- accel/accel.sh@21 -- # val='4096 bytes' 00:32:23.893 12:54:43 -- accel/accel.sh@22 -- # case "$var" in 00:32:23.893 12:54:43 -- accel/accel.sh@20 -- # IFS=: 00:32:24.151 12:54:43 -- accel/accel.sh@20 -- # read -r var val 00:32:24.151 12:54:43 -- accel/accel.sh@21 -- # val= 00:32:24.151 12:54:43 -- accel/accel.sh@22 -- # case "$var" in 00:32:24.151 12:54:43 -- accel/accel.sh@20 -- # IFS=: 00:32:24.151 12:54:43 -- accel/accel.sh@20 -- # read -r var val 00:32:24.151 12:54:43 -- accel/accel.sh@21 -- # val=software 00:32:24.151 12:54:43 -- accel/accel.sh@22 -- # case "$var" in 00:32:24.151 12:54:43 -- accel/accel.sh@23 -- # accel_module=software 00:32:24.151 12:54:43 -- accel/accel.sh@20 -- # IFS=: 00:32:24.151 12:54:43 -- accel/accel.sh@20 -- # read -r var val 00:32:24.151 12:54:43 -- accel/accel.sh@21 -- # val=32 00:32:24.151 12:54:43 -- accel/accel.sh@22 -- # case "$var" in 00:32:24.151 12:54:43 -- accel/accel.sh@20 -- # IFS=: 00:32:24.151 12:54:43 -- accel/accel.sh@20 -- # read -r var val 00:32:24.151 12:54:43 -- accel/accel.sh@21 -- # val=32 00:32:24.151 12:54:43 -- accel/accel.sh@22 -- # case "$var" in 00:32:24.151 12:54:43 -- accel/accel.sh@20 -- # IFS=: 00:32:24.151 12:54:43 -- accel/accel.sh@20 -- # read -r var val 00:32:24.151 12:54:43 -- accel/accel.sh@21 -- # val=1 00:32:24.151 12:54:43 -- accel/accel.sh@22 -- # case "$var" in 00:32:24.151 12:54:43 -- accel/accel.sh@20 -- # IFS=: 00:32:24.151 
12:54:43 -- accel/accel.sh@20 -- # read -r var val 00:32:24.151 12:54:43 -- accel/accel.sh@21 -- # val='1 seconds' 00:32:24.151 12:54:43 -- accel/accel.sh@22 -- # case "$var" in 00:32:24.151 12:54:43 -- accel/accel.sh@20 -- # IFS=: 00:32:24.151 12:54:43 -- accel/accel.sh@20 -- # read -r var val 00:32:24.151 12:54:43 -- accel/accel.sh@21 -- # val=Yes 00:32:24.151 12:54:43 -- accel/accel.sh@22 -- # case "$var" in 00:32:24.151 12:54:43 -- accel/accel.sh@20 -- # IFS=: 00:32:24.151 12:54:43 -- accel/accel.sh@20 -- # read -r var val 00:32:24.151 12:54:43 -- accel/accel.sh@21 -- # val= 00:32:24.151 12:54:43 -- accel/accel.sh@22 -- # case "$var" in 00:32:24.151 12:54:43 -- accel/accel.sh@20 -- # IFS=: 00:32:24.151 12:54:43 -- accel/accel.sh@20 -- # read -r var val 00:32:24.151 12:54:43 -- accel/accel.sh@21 -- # val= 00:32:24.151 12:54:43 -- accel/accel.sh@22 -- # case "$var" in 00:32:24.151 12:54:43 -- accel/accel.sh@20 -- # IFS=: 00:32:24.151 12:54:43 -- accel/accel.sh@20 -- # read -r var val 00:32:25.085 12:54:44 -- accel/accel.sh@21 -- # val= 00:32:25.085 12:54:44 -- accel/accel.sh@22 -- # case "$var" in 00:32:25.085 12:54:44 -- accel/accel.sh@20 -- # IFS=: 00:32:25.085 12:54:44 -- accel/accel.sh@20 -- # read -r var val 00:32:25.085 12:54:44 -- accel/accel.sh@21 -- # val= 00:32:25.085 12:54:44 -- accel/accel.sh@22 -- # case "$var" in 00:32:25.085 12:54:44 -- accel/accel.sh@20 -- # IFS=: 00:32:25.085 12:54:44 -- accel/accel.sh@20 -- # read -r var val 00:32:25.085 12:54:44 -- accel/accel.sh@21 -- # val= 00:32:25.085 12:54:44 -- accel/accel.sh@22 -- # case "$var" in 00:32:25.085 12:54:44 -- accel/accel.sh@20 -- # IFS=: 00:32:25.085 12:54:44 -- accel/accel.sh@20 -- # read -r var val 00:32:25.085 12:54:44 -- accel/accel.sh@21 -- # val= 00:32:25.085 12:54:44 -- accel/accel.sh@22 -- # case "$var" in 00:32:25.085 12:54:44 -- accel/accel.sh@20 -- # IFS=: 00:32:25.085 12:54:44 -- accel/accel.sh@20 -- # read -r var val 00:32:25.085 12:54:44 -- accel/accel.sh@21 -- # val= 00:32:25.085 12:54:44 -- accel/accel.sh@22 -- # case "$var" in 00:32:25.085 12:54:44 -- accel/accel.sh@20 -- # IFS=: 00:32:25.085 12:54:44 -- accel/accel.sh@20 -- # read -r var val 00:32:25.085 12:54:44 -- accel/accel.sh@21 -- # val= 00:32:25.085 12:54:44 -- accel/accel.sh@22 -- # case "$var" in 00:32:25.085 12:54:44 -- accel/accel.sh@20 -- # IFS=: 00:32:25.085 12:54:44 -- accel/accel.sh@20 -- # read -r var val 00:32:25.085 12:54:44 -- accel/accel.sh@28 -- # [[ -n software ]] 00:32:25.085 12:54:44 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:32:25.085 12:54:44 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:25.085 00:32:25.085 real 0m2.924s 00:32:25.085 user 0m2.502s 00:32:25.085 sys 0m0.216s 00:32:25.085 12:54:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:25.085 ************************************ 00:32:25.085 END TEST accel_dualcast 00:32:25.085 12:54:44 -- common/autotest_common.sh@10 -- # set +x 00:32:25.085 ************************************ 00:32:25.085 12:54:44 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:32:25.343 12:54:44 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:32:25.343 12:54:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:25.343 12:54:44 -- common/autotest_common.sh@10 -- # set +x 00:32:25.343 ************************************ 00:32:25.343 START TEST accel_compare 00:32:25.343 ************************************ 00:32:25.343 12:54:44 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compare -y 00:32:25.343 
12:54:44 -- accel/accel.sh@16 -- # local accel_opc 00:32:25.343 12:54:44 -- accel/accel.sh@17 -- # local accel_module 00:32:25.343 12:54:44 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:32:25.343 12:54:44 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:32:25.343 12:54:44 -- accel/accel.sh@12 -- # build_accel_config 00:32:25.343 12:54:44 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:32:25.343 12:54:44 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:32:25.343 12:54:44 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:32:25.343 12:54:44 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:32:25.343 12:54:44 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:32:25.343 12:54:44 -- accel/accel.sh@41 -- # local IFS=, 00:32:25.343 12:54:44 -- accel/accel.sh@42 -- # jq -r . 00:32:25.343 [2024-07-22 12:54:44.535465] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:32:25.343 [2024-07-22 12:54:44.535560] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70523 ] 00:32:25.343 [2024-07-22 12:54:44.673024] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:25.601 [2024-07-22 12:54:44.766943] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:26.975 12:54:45 -- accel/accel.sh@18 -- # out=' 00:32:26.975 SPDK Configuration: 00:32:26.975 Core mask: 0x1 00:32:26.975 00:32:26.975 Accel Perf Configuration: 00:32:26.975 Workload Type: compare 00:32:26.975 Transfer size: 4096 bytes 00:32:26.975 Vector count 1 00:32:26.975 Module: software 00:32:26.975 Queue depth: 32 00:32:26.975 Allocate depth: 32 00:32:26.975 # threads/core: 1 00:32:26.975 Run time: 1 seconds 00:32:26.975 Verify: Yes 00:32:26.975 00:32:26.975 Running for 1 seconds... 00:32:26.975 00:32:26.975 Core,Thread Transfers Bandwidth Failed Miscompares 00:32:26.975 ------------------------------------------------------------------------------------ 00:32:26.975 0,0 436832/s 1706 MiB/s 0 0 00:32:26.975 ==================================================================================== 00:32:26.975 Total 436832/s 1706 MiB/s 0 0' 00:32:26.975 12:54:45 -- accel/accel.sh@20 -- # IFS=: 00:32:26.975 12:54:45 -- accel/accel.sh@20 -- # read -r var val 00:32:26.975 12:54:45 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:32:26.975 12:54:45 -- accel/accel.sh@12 -- # build_accel_config 00:32:26.975 12:54:45 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:32:26.976 12:54:45 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:32:26.976 12:54:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:32:26.976 12:54:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:32:26.976 12:54:45 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:32:26.976 12:54:45 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:32:26.976 12:54:45 -- accel/accel.sh@41 -- # local IFS=, 00:32:26.976 12:54:45 -- accel/accel.sh@42 -- # jq -r . 00:32:26.976 [2024-07-22 12:54:46.008619] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
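For the compare workload just shown, 436832 transfers/s x 4096 bytes comes to roughly 1706 MiB/s, again consistent with both result rows. The equivalent stand-alone invocation, with the same assumption as before about omitting the '-c /dev/fd/62' config:
# Hedged sketch: manual compare run using the flags from the trace above.
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compare -y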
00:32:26.976 [2024-07-22 12:54:46.008710] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70548 ] 00:32:26.976 [2024-07-22 12:54:46.145911] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:26.976 [2024-07-22 12:54:46.239990] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:26.976 12:54:46 -- accel/accel.sh@21 -- # val= 00:32:26.976 12:54:46 -- accel/accel.sh@22 -- # case "$var" in 00:32:26.976 12:54:46 -- accel/accel.sh@20 -- # IFS=: 00:32:26.976 12:54:46 -- accel/accel.sh@20 -- # read -r var val 00:32:26.976 12:54:46 -- accel/accel.sh@21 -- # val= 00:32:26.976 12:54:46 -- accel/accel.sh@22 -- # case "$var" in 00:32:26.976 12:54:46 -- accel/accel.sh@20 -- # IFS=: 00:32:26.976 12:54:46 -- accel/accel.sh@20 -- # read -r var val 00:32:26.976 12:54:46 -- accel/accel.sh@21 -- # val=0x1 00:32:26.976 12:54:46 -- accel/accel.sh@22 -- # case "$var" in 00:32:26.976 12:54:46 -- accel/accel.sh@20 -- # IFS=: 00:32:26.976 12:54:46 -- accel/accel.sh@20 -- # read -r var val 00:32:26.976 12:54:46 -- accel/accel.sh@21 -- # val= 00:32:26.976 12:54:46 -- accel/accel.sh@22 -- # case "$var" in 00:32:26.976 12:54:46 -- accel/accel.sh@20 -- # IFS=: 00:32:26.976 12:54:46 -- accel/accel.sh@20 -- # read -r var val 00:32:26.976 12:54:46 -- accel/accel.sh@21 -- # val= 00:32:26.976 12:54:46 -- accel/accel.sh@22 -- # case "$var" in 00:32:26.976 12:54:46 -- accel/accel.sh@20 -- # IFS=: 00:32:26.976 12:54:46 -- accel/accel.sh@20 -- # read -r var val 00:32:26.976 12:54:46 -- accel/accel.sh@21 -- # val=compare 00:32:26.976 12:54:46 -- accel/accel.sh@22 -- # case "$var" in 00:32:26.976 12:54:46 -- accel/accel.sh@24 -- # accel_opc=compare 00:32:26.976 12:54:46 -- accel/accel.sh@20 -- # IFS=: 00:32:26.976 12:54:46 -- accel/accel.sh@20 -- # read -r var val 00:32:26.976 12:54:46 -- accel/accel.sh@21 -- # val='4096 bytes' 00:32:26.976 12:54:46 -- accel/accel.sh@22 -- # case "$var" in 00:32:26.976 12:54:46 -- accel/accel.sh@20 -- # IFS=: 00:32:26.976 12:54:46 -- accel/accel.sh@20 -- # read -r var val 00:32:26.976 12:54:46 -- accel/accel.sh@21 -- # val= 00:32:26.976 12:54:46 -- accel/accel.sh@22 -- # case "$var" in 00:32:26.976 12:54:46 -- accel/accel.sh@20 -- # IFS=: 00:32:26.976 12:54:46 -- accel/accel.sh@20 -- # read -r var val 00:32:26.976 12:54:46 -- accel/accel.sh@21 -- # val=software 00:32:26.976 12:54:46 -- accel/accel.sh@22 -- # case "$var" in 00:32:26.976 12:54:46 -- accel/accel.sh@23 -- # accel_module=software 00:32:26.976 12:54:46 -- accel/accel.sh@20 -- # IFS=: 00:32:26.976 12:54:46 -- accel/accel.sh@20 -- # read -r var val 00:32:26.976 12:54:46 -- accel/accel.sh@21 -- # val=32 00:32:26.976 12:54:46 -- accel/accel.sh@22 -- # case "$var" in 00:32:26.976 12:54:46 -- accel/accel.sh@20 -- # IFS=: 00:32:26.976 12:54:46 -- accel/accel.sh@20 -- # read -r var val 00:32:26.976 12:54:46 -- accel/accel.sh@21 -- # val=32 00:32:26.976 12:54:46 -- accel/accel.sh@22 -- # case "$var" in 00:32:26.976 12:54:46 -- accel/accel.sh@20 -- # IFS=: 00:32:26.976 12:54:46 -- accel/accel.sh@20 -- # read -r var val 00:32:26.976 12:54:46 -- accel/accel.sh@21 -- # val=1 00:32:26.976 12:54:46 -- accel/accel.sh@22 -- # case "$var" in 00:32:26.976 12:54:46 -- accel/accel.sh@20 -- # IFS=: 00:32:26.976 12:54:46 -- accel/accel.sh@20 -- # read -r var val 00:32:26.976 12:54:46 -- accel/accel.sh@21 -- # val='1 seconds' 
00:32:26.976 12:54:46 -- accel/accel.sh@22 -- # case "$var" in 00:32:26.976 12:54:46 -- accel/accel.sh@20 -- # IFS=: 00:32:26.976 12:54:46 -- accel/accel.sh@20 -- # read -r var val 00:32:26.976 12:54:46 -- accel/accel.sh@21 -- # val=Yes 00:32:26.976 12:54:46 -- accel/accel.sh@22 -- # case "$var" in 00:32:26.976 12:54:46 -- accel/accel.sh@20 -- # IFS=: 00:32:26.976 12:54:46 -- accel/accel.sh@20 -- # read -r var val 00:32:26.976 12:54:46 -- accel/accel.sh@21 -- # val= 00:32:26.976 12:54:46 -- accel/accel.sh@22 -- # case "$var" in 00:32:26.976 12:54:46 -- accel/accel.sh@20 -- # IFS=: 00:32:26.976 12:54:46 -- accel/accel.sh@20 -- # read -r var val 00:32:26.976 12:54:46 -- accel/accel.sh@21 -- # val= 00:32:26.976 12:54:46 -- accel/accel.sh@22 -- # case "$var" in 00:32:26.976 12:54:46 -- accel/accel.sh@20 -- # IFS=: 00:32:26.976 12:54:46 -- accel/accel.sh@20 -- # read -r var val 00:32:28.352 12:54:47 -- accel/accel.sh@21 -- # val= 00:32:28.352 12:54:47 -- accel/accel.sh@22 -- # case "$var" in 00:32:28.352 12:54:47 -- accel/accel.sh@20 -- # IFS=: 00:32:28.352 12:54:47 -- accel/accel.sh@20 -- # read -r var val 00:32:28.352 12:54:47 -- accel/accel.sh@21 -- # val= 00:32:28.352 12:54:47 -- accel/accel.sh@22 -- # case "$var" in 00:32:28.352 12:54:47 -- accel/accel.sh@20 -- # IFS=: 00:32:28.352 12:54:47 -- accel/accel.sh@20 -- # read -r var val 00:32:28.352 12:54:47 -- accel/accel.sh@21 -- # val= 00:32:28.352 12:54:47 -- accel/accel.sh@22 -- # case "$var" in 00:32:28.352 12:54:47 -- accel/accel.sh@20 -- # IFS=: 00:32:28.352 12:54:47 -- accel/accel.sh@20 -- # read -r var val 00:32:28.352 12:54:47 -- accel/accel.sh@21 -- # val= 00:32:28.352 12:54:47 -- accel/accel.sh@22 -- # case "$var" in 00:32:28.352 12:54:47 -- accel/accel.sh@20 -- # IFS=: 00:32:28.352 12:54:47 -- accel/accel.sh@20 -- # read -r var val 00:32:28.352 12:54:47 -- accel/accel.sh@21 -- # val= 00:32:28.352 12:54:47 -- accel/accel.sh@22 -- # case "$var" in 00:32:28.352 12:54:47 -- accel/accel.sh@20 -- # IFS=: 00:32:28.352 12:54:47 -- accel/accel.sh@20 -- # read -r var val 00:32:28.352 12:54:47 -- accel/accel.sh@21 -- # val= 00:32:28.352 12:54:47 -- accel/accel.sh@22 -- # case "$var" in 00:32:28.352 12:54:47 -- accel/accel.sh@20 -- # IFS=: 00:32:28.352 12:54:47 -- accel/accel.sh@20 -- # read -r var val 00:32:28.352 12:54:47 -- accel/accel.sh@28 -- # [[ -n software ]] 00:32:28.353 12:54:47 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:32:28.353 12:54:47 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:28.353 00:32:28.353 real 0m2.957s 00:32:28.353 user 0m2.528s 00:32:28.353 sys 0m0.223s 00:32:28.353 12:54:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:28.353 12:54:47 -- common/autotest_common.sh@10 -- # set +x 00:32:28.353 ************************************ 00:32:28.353 END TEST accel_compare 00:32:28.353 ************************************ 00:32:28.353 12:54:47 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:32:28.353 12:54:47 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:32:28.353 12:54:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:28.353 12:54:47 -- common/autotest_common.sh@10 -- # set +x 00:32:28.353 ************************************ 00:32:28.353 START TEST accel_xor 00:32:28.353 ************************************ 00:32:28.353 12:54:47 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y 00:32:28.353 12:54:47 -- accel/accel.sh@16 -- # local accel_opc 00:32:28.353 12:54:47 -- accel/accel.sh@17 -- # local accel_module 00:32:28.353 
12:54:47 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:32:28.353 12:54:47 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:32:28.353 12:54:47 -- accel/accel.sh@12 -- # build_accel_config 00:32:28.353 12:54:47 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:32:28.353 12:54:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:32:28.353 12:54:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:32:28.353 12:54:47 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:32:28.353 12:54:47 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:32:28.353 12:54:47 -- accel/accel.sh@41 -- # local IFS=, 00:32:28.353 12:54:47 -- accel/accel.sh@42 -- # jq -r . 00:32:28.353 [2024-07-22 12:54:47.543105] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:32:28.353 [2024-07-22 12:54:47.543199] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70577 ] 00:32:28.353 [2024-07-22 12:54:47.674551] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:28.353 [2024-07-22 12:54:47.766420] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:29.730 12:54:48 -- accel/accel.sh@18 -- # out=' 00:32:29.730 SPDK Configuration: 00:32:29.730 Core mask: 0x1 00:32:29.730 00:32:29.730 Accel Perf Configuration: 00:32:29.730 Workload Type: xor 00:32:29.730 Source buffers: 2 00:32:29.730 Transfer size: 4096 bytes 00:32:29.730 Vector count 1 00:32:29.730 Module: software 00:32:29.730 Queue depth: 32 00:32:29.730 Allocate depth: 32 00:32:29.730 # threads/core: 1 00:32:29.730 Run time: 1 seconds 00:32:29.730 Verify: Yes 00:32:29.730 00:32:29.730 Running for 1 seconds... 00:32:29.730 00:32:29.730 Core,Thread Transfers Bandwidth Failed Miscompares 00:32:29.730 ------------------------------------------------------------------------------------ 00:32:29.730 0,0 250080/s 976 MiB/s 0 0 00:32:29.730 ==================================================================================== 00:32:29.730 Total 250080/s 976 MiB/s 0 0' 00:32:29.730 12:54:48 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:32:29.730 12:54:48 -- accel/accel.sh@20 -- # IFS=: 00:32:29.730 12:54:48 -- accel/accel.sh@20 -- # read -r var val 00:32:29.730 12:54:48 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:32:29.730 12:54:48 -- accel/accel.sh@12 -- # build_accel_config 00:32:29.730 12:54:48 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:32:29.730 12:54:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:32:29.730 12:54:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:32:29.730 12:54:48 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:32:29.730 12:54:48 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:32:29.730 12:54:48 -- accel/accel.sh@41 -- # local IFS=, 00:32:29.730 12:54:48 -- accel/accel.sh@42 -- # jq -r . 00:32:29.730 [2024-07-22 12:54:48.999613] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
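The xor pass above uses two source buffers ("Source buffers: 2" in the configuration block) and lands at 250080 transfers/s, about 976 MiB/s. A stand-alone sketch of the same run, assuming the '-c /dev/fd/62' config can be dropped:
# Hedged sketch: manual two-buffer xor run, flags copied from the trace above.
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y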
00:32:29.730 [2024-07-22 12:54:48.999716] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70597 ] 00:32:29.730 [2024-07-22 12:54:49.130989] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:29.989 [2024-07-22 12:54:49.208887] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:29.989 12:54:49 -- accel/accel.sh@21 -- # val= 00:32:29.989 12:54:49 -- accel/accel.sh@22 -- # case "$var" in 00:32:29.989 12:54:49 -- accel/accel.sh@20 -- # IFS=: 00:32:29.989 12:54:49 -- accel/accel.sh@20 -- # read -r var val 00:32:29.989 12:54:49 -- accel/accel.sh@21 -- # val= 00:32:29.989 12:54:49 -- accel/accel.sh@22 -- # case "$var" in 00:32:29.989 12:54:49 -- accel/accel.sh@20 -- # IFS=: 00:32:29.989 12:54:49 -- accel/accel.sh@20 -- # read -r var val 00:32:29.989 12:54:49 -- accel/accel.sh@21 -- # val=0x1 00:32:29.989 12:54:49 -- accel/accel.sh@22 -- # case "$var" in 00:32:29.989 12:54:49 -- accel/accel.sh@20 -- # IFS=: 00:32:29.989 12:54:49 -- accel/accel.sh@20 -- # read -r var val 00:32:29.989 12:54:49 -- accel/accel.sh@21 -- # val= 00:32:29.989 12:54:49 -- accel/accel.sh@22 -- # case "$var" in 00:32:29.989 12:54:49 -- accel/accel.sh@20 -- # IFS=: 00:32:29.989 12:54:49 -- accel/accel.sh@20 -- # read -r var val 00:32:29.989 12:54:49 -- accel/accel.sh@21 -- # val= 00:32:29.989 12:54:49 -- accel/accel.sh@22 -- # case "$var" in 00:32:29.989 12:54:49 -- accel/accel.sh@20 -- # IFS=: 00:32:29.989 12:54:49 -- accel/accel.sh@20 -- # read -r var val 00:32:29.989 12:54:49 -- accel/accel.sh@21 -- # val=xor 00:32:29.989 12:54:49 -- accel/accel.sh@22 -- # case "$var" in 00:32:29.989 12:54:49 -- accel/accel.sh@24 -- # accel_opc=xor 00:32:29.989 12:54:49 -- accel/accel.sh@20 -- # IFS=: 00:32:29.989 12:54:49 -- accel/accel.sh@20 -- # read -r var val 00:32:29.989 12:54:49 -- accel/accel.sh@21 -- # val=2 00:32:29.989 12:54:49 -- accel/accel.sh@22 -- # case "$var" in 00:32:29.989 12:54:49 -- accel/accel.sh@20 -- # IFS=: 00:32:29.989 12:54:49 -- accel/accel.sh@20 -- # read -r var val 00:32:29.989 12:54:49 -- accel/accel.sh@21 -- # val='4096 bytes' 00:32:29.989 12:54:49 -- accel/accel.sh@22 -- # case "$var" in 00:32:29.989 12:54:49 -- accel/accel.sh@20 -- # IFS=: 00:32:29.989 12:54:49 -- accel/accel.sh@20 -- # read -r var val 00:32:29.989 12:54:49 -- accel/accel.sh@21 -- # val= 00:32:29.989 12:54:49 -- accel/accel.sh@22 -- # case "$var" in 00:32:29.989 12:54:49 -- accel/accel.sh@20 -- # IFS=: 00:32:29.989 12:54:49 -- accel/accel.sh@20 -- # read -r var val 00:32:29.989 12:54:49 -- accel/accel.sh@21 -- # val=software 00:32:29.989 12:54:49 -- accel/accel.sh@22 -- # case "$var" in 00:32:29.989 12:54:49 -- accel/accel.sh@23 -- # accel_module=software 00:32:29.989 12:54:49 -- accel/accel.sh@20 -- # IFS=: 00:32:29.989 12:54:49 -- accel/accel.sh@20 -- # read -r var val 00:32:29.989 12:54:49 -- accel/accel.sh@21 -- # val=32 00:32:29.989 12:54:49 -- accel/accel.sh@22 -- # case "$var" in 00:32:29.989 12:54:49 -- accel/accel.sh@20 -- # IFS=: 00:32:29.989 12:54:49 -- accel/accel.sh@20 -- # read -r var val 00:32:29.989 12:54:49 -- accel/accel.sh@21 -- # val=32 00:32:29.989 12:54:49 -- accel/accel.sh@22 -- # case "$var" in 00:32:29.989 12:54:49 -- accel/accel.sh@20 -- # IFS=: 00:32:29.989 12:54:49 -- accel/accel.sh@20 -- # read -r var val 00:32:29.989 12:54:49 -- accel/accel.sh@21 -- # val=1 00:32:29.989 12:54:49 -- 
accel/accel.sh@22 -- # case "$var" in 00:32:29.989 12:54:49 -- accel/accel.sh@20 -- # IFS=: 00:32:29.989 12:54:49 -- accel/accel.sh@20 -- # read -r var val 00:32:29.989 12:54:49 -- accel/accel.sh@21 -- # val='1 seconds' 00:32:29.989 12:54:49 -- accel/accel.sh@22 -- # case "$var" in 00:32:29.989 12:54:49 -- accel/accel.sh@20 -- # IFS=: 00:32:29.989 12:54:49 -- accel/accel.sh@20 -- # read -r var val 00:32:29.989 12:54:49 -- accel/accel.sh@21 -- # val=Yes 00:32:29.989 12:54:49 -- accel/accel.sh@22 -- # case "$var" in 00:32:29.989 12:54:49 -- accel/accel.sh@20 -- # IFS=: 00:32:29.989 12:54:49 -- accel/accel.sh@20 -- # read -r var val 00:32:29.989 12:54:49 -- accel/accel.sh@21 -- # val= 00:32:29.989 12:54:49 -- accel/accel.sh@22 -- # case "$var" in 00:32:29.989 12:54:49 -- accel/accel.sh@20 -- # IFS=: 00:32:29.989 12:54:49 -- accel/accel.sh@20 -- # read -r var val 00:32:29.989 12:54:49 -- accel/accel.sh@21 -- # val= 00:32:29.989 12:54:49 -- accel/accel.sh@22 -- # case "$var" in 00:32:29.989 12:54:49 -- accel/accel.sh@20 -- # IFS=: 00:32:29.989 12:54:49 -- accel/accel.sh@20 -- # read -r var val 00:32:31.364 12:54:50 -- accel/accel.sh@21 -- # val= 00:32:31.364 12:54:50 -- accel/accel.sh@22 -- # case "$var" in 00:32:31.364 12:54:50 -- accel/accel.sh@20 -- # IFS=: 00:32:31.364 12:54:50 -- accel/accel.sh@20 -- # read -r var val 00:32:31.364 12:54:50 -- accel/accel.sh@21 -- # val= 00:32:31.364 12:54:50 -- accel/accel.sh@22 -- # case "$var" in 00:32:31.364 12:54:50 -- accel/accel.sh@20 -- # IFS=: 00:32:31.364 12:54:50 -- accel/accel.sh@20 -- # read -r var val 00:32:31.364 12:54:50 -- accel/accel.sh@21 -- # val= 00:32:31.364 12:54:50 -- accel/accel.sh@22 -- # case "$var" in 00:32:31.364 12:54:50 -- accel/accel.sh@20 -- # IFS=: 00:32:31.364 12:54:50 -- accel/accel.sh@20 -- # read -r var val 00:32:31.364 12:54:50 -- accel/accel.sh@21 -- # val= 00:32:31.364 12:54:50 -- accel/accel.sh@22 -- # case "$var" in 00:32:31.364 12:54:50 -- accel/accel.sh@20 -- # IFS=: 00:32:31.364 12:54:50 -- accel/accel.sh@20 -- # read -r var val 00:32:31.364 12:54:50 -- accel/accel.sh@21 -- # val= 00:32:31.364 12:54:50 -- accel/accel.sh@22 -- # case "$var" in 00:32:31.364 12:54:50 -- accel/accel.sh@20 -- # IFS=: 00:32:31.364 12:54:50 -- accel/accel.sh@20 -- # read -r var val 00:32:31.364 12:54:50 -- accel/accel.sh@21 -- # val= 00:32:31.364 12:54:50 -- accel/accel.sh@22 -- # case "$var" in 00:32:31.364 12:54:50 -- accel/accel.sh@20 -- # IFS=: 00:32:31.364 12:54:50 -- accel/accel.sh@20 -- # read -r var val 00:32:31.364 12:54:50 -- accel/accel.sh@28 -- # [[ -n software ]] 00:32:31.365 12:54:50 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:32:31.365 12:54:50 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:31.365 00:32:31.365 real 0m2.908s 00:32:31.365 user 0m2.467s 00:32:31.365 sys 0m0.240s 00:32:31.365 12:54:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:31.365 12:54:50 -- common/autotest_common.sh@10 -- # set +x 00:32:31.365 ************************************ 00:32:31.365 END TEST accel_xor 00:32:31.365 ************************************ 00:32:31.365 12:54:50 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:32:31.365 12:54:50 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:32:31.365 12:54:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:31.365 12:54:50 -- common/autotest_common.sh@10 -- # set +x 00:32:31.365 ************************************ 00:32:31.365 START TEST accel_xor 00:32:31.365 ************************************ 00:32:31.365 
12:54:50 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y -x 3 00:32:31.365 12:54:50 -- accel/accel.sh@16 -- # local accel_opc 00:32:31.365 12:54:50 -- accel/accel.sh@17 -- # local accel_module 00:32:31.365 12:54:50 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:32:31.365 12:54:50 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:32:31.365 12:54:50 -- accel/accel.sh@12 -- # build_accel_config 00:32:31.365 12:54:50 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:32:31.365 12:54:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:32:31.365 12:54:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:32:31.365 12:54:50 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:32:31.365 12:54:50 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:32:31.365 12:54:50 -- accel/accel.sh@41 -- # local IFS=, 00:32:31.365 12:54:50 -- accel/accel.sh@42 -- # jq -r . 00:32:31.365 [2024-07-22 12:54:50.502952] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:32:31.365 [2024-07-22 12:54:50.503047] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70631 ] 00:32:31.365 [2024-07-22 12:54:50.639283] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:31.365 [2024-07-22 12:54:50.715627] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:32.741 12:54:51 -- accel/accel.sh@18 -- # out=' 00:32:32.741 SPDK Configuration: 00:32:32.741 Core mask: 0x1 00:32:32.741 00:32:32.741 Accel Perf Configuration: 00:32:32.741 Workload Type: xor 00:32:32.741 Source buffers: 3 00:32:32.741 Transfer size: 4096 bytes 00:32:32.741 Vector count 1 00:32:32.741 Module: software 00:32:32.741 Queue depth: 32 00:32:32.742 Allocate depth: 32 00:32:32.742 # threads/core: 1 00:32:32.742 Run time: 1 seconds 00:32:32.742 Verify: Yes 00:32:32.742 00:32:32.742 Running for 1 seconds... 00:32:32.742 00:32:32.742 Core,Thread Transfers Bandwidth Failed Miscompares 00:32:32.742 ------------------------------------------------------------------------------------ 00:32:32.742 0,0 249600/s 975 MiB/s 0 0 00:32:32.742 ==================================================================================== 00:32:32.742 Total 249600/s 975 MiB/s 0 0' 00:32:32.742 12:54:51 -- accel/accel.sh@20 -- # IFS=: 00:32:32.742 12:54:51 -- accel/accel.sh@20 -- # read -r var val 00:32:32.742 12:54:51 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:32:32.742 12:54:51 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:32:32.742 12:54:51 -- accel/accel.sh@12 -- # build_accel_config 00:32:32.742 12:54:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:32:32.742 12:54:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:32:32.742 12:54:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:32:32.742 12:54:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:32:32.742 12:54:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:32:32.742 12:54:51 -- accel/accel.sh@41 -- # local IFS=, 00:32:32.742 12:54:51 -- accel/accel.sh@42 -- # jq -r . 00:32:32.742 [2024-07-22 12:54:51.963497] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
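This second xor case adds '-x 3', and the configuration block confirms three source buffers; throughput is essentially unchanged versus the two-buffer run (249600/s, about 975 MiB/s, against 250080/s, about 976 MiB/s), so the extra source buffer costs little with the software module here. Stand-alone sketch, with the usual assumption about the omitted '-c /dev/fd/62' config:
# Hedged sketch: manual three-buffer xor run ('-x 3' as in the trace above).
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x 3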
00:32:32.742 [2024-07-22 12:54:51.963613] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70645 ] 00:32:32.742 [2024-07-22 12:54:52.100501] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:33.000 [2024-07-22 12:54:52.179550] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:33.000 12:54:52 -- accel/accel.sh@21 -- # val= 00:32:33.000 12:54:52 -- accel/accel.sh@22 -- # case "$var" in 00:32:33.000 12:54:52 -- accel/accel.sh@20 -- # IFS=: 00:32:33.000 12:54:52 -- accel/accel.sh@20 -- # read -r var val 00:32:33.000 12:54:52 -- accel/accel.sh@21 -- # val= 00:32:33.000 12:54:52 -- accel/accel.sh@22 -- # case "$var" in 00:32:33.000 12:54:52 -- accel/accel.sh@20 -- # IFS=: 00:32:33.000 12:54:52 -- accel/accel.sh@20 -- # read -r var val 00:32:33.000 12:54:52 -- accel/accel.sh@21 -- # val=0x1 00:32:33.000 12:54:52 -- accel/accel.sh@22 -- # case "$var" in 00:32:33.000 12:54:52 -- accel/accel.sh@20 -- # IFS=: 00:32:33.000 12:54:52 -- accel/accel.sh@20 -- # read -r var val 00:32:33.000 12:54:52 -- accel/accel.sh@21 -- # val= 00:32:33.000 12:54:52 -- accel/accel.sh@22 -- # case "$var" in 00:32:33.000 12:54:52 -- accel/accel.sh@20 -- # IFS=: 00:32:33.000 12:54:52 -- accel/accel.sh@20 -- # read -r var val 00:32:33.000 12:54:52 -- accel/accel.sh@21 -- # val= 00:32:33.000 12:54:52 -- accel/accel.sh@22 -- # case "$var" in 00:32:33.000 12:54:52 -- accel/accel.sh@20 -- # IFS=: 00:32:33.000 12:54:52 -- accel/accel.sh@20 -- # read -r var val 00:32:33.000 12:54:52 -- accel/accel.sh@21 -- # val=xor 00:32:33.000 12:54:52 -- accel/accel.sh@22 -- # case "$var" in 00:32:33.000 12:54:52 -- accel/accel.sh@24 -- # accel_opc=xor 00:32:33.000 12:54:52 -- accel/accel.sh@20 -- # IFS=: 00:32:33.000 12:54:52 -- accel/accel.sh@20 -- # read -r var val 00:32:33.000 12:54:52 -- accel/accel.sh@21 -- # val=3 00:32:33.000 12:54:52 -- accel/accel.sh@22 -- # case "$var" in 00:32:33.000 12:54:52 -- accel/accel.sh@20 -- # IFS=: 00:32:33.000 12:54:52 -- accel/accel.sh@20 -- # read -r var val 00:32:33.000 12:54:52 -- accel/accel.sh@21 -- # val='4096 bytes' 00:32:33.000 12:54:52 -- accel/accel.sh@22 -- # case "$var" in 00:32:33.000 12:54:52 -- accel/accel.sh@20 -- # IFS=: 00:32:33.000 12:54:52 -- accel/accel.sh@20 -- # read -r var val 00:32:33.000 12:54:52 -- accel/accel.sh@21 -- # val= 00:32:33.000 12:54:52 -- accel/accel.sh@22 -- # case "$var" in 00:32:33.000 12:54:52 -- accel/accel.sh@20 -- # IFS=: 00:32:33.000 12:54:52 -- accel/accel.sh@20 -- # read -r var val 00:32:33.000 12:54:52 -- accel/accel.sh@21 -- # val=software 00:32:33.000 12:54:52 -- accel/accel.sh@22 -- # case "$var" in 00:32:33.001 12:54:52 -- accel/accel.sh@23 -- # accel_module=software 00:32:33.001 12:54:52 -- accel/accel.sh@20 -- # IFS=: 00:32:33.001 12:54:52 -- accel/accel.sh@20 -- # read -r var val 00:32:33.001 12:54:52 -- accel/accel.sh@21 -- # val=32 00:32:33.001 12:54:52 -- accel/accel.sh@22 -- # case "$var" in 00:32:33.001 12:54:52 -- accel/accel.sh@20 -- # IFS=: 00:32:33.001 12:54:52 -- accel/accel.sh@20 -- # read -r var val 00:32:33.001 12:54:52 -- accel/accel.sh@21 -- # val=32 00:32:33.001 12:54:52 -- accel/accel.sh@22 -- # case "$var" in 00:32:33.001 12:54:52 -- accel/accel.sh@20 -- # IFS=: 00:32:33.001 12:54:52 -- accel/accel.sh@20 -- # read -r var val 00:32:33.001 12:54:52 -- accel/accel.sh@21 -- # val=1 00:32:33.001 12:54:52 -- 
accel/accel.sh@22 -- # case "$var" in 00:32:33.001 12:54:52 -- accel/accel.sh@20 -- # IFS=: 00:32:33.001 12:54:52 -- accel/accel.sh@20 -- # read -r var val 00:32:33.001 12:54:52 -- accel/accel.sh@21 -- # val='1 seconds' 00:32:33.001 12:54:52 -- accel/accel.sh@22 -- # case "$var" in 00:32:33.001 12:54:52 -- accel/accel.sh@20 -- # IFS=: 00:32:33.001 12:54:52 -- accel/accel.sh@20 -- # read -r var val 00:32:33.001 12:54:52 -- accel/accel.sh@21 -- # val=Yes 00:32:33.001 12:54:52 -- accel/accel.sh@22 -- # case "$var" in 00:32:33.001 12:54:52 -- accel/accel.sh@20 -- # IFS=: 00:32:33.001 12:54:52 -- accel/accel.sh@20 -- # read -r var val 00:32:33.001 12:54:52 -- accel/accel.sh@21 -- # val= 00:32:33.001 12:54:52 -- accel/accel.sh@22 -- # case "$var" in 00:32:33.001 12:54:52 -- accel/accel.sh@20 -- # IFS=: 00:32:33.001 12:54:52 -- accel/accel.sh@20 -- # read -r var val 00:32:33.001 12:54:52 -- accel/accel.sh@21 -- # val= 00:32:33.001 12:54:52 -- accel/accel.sh@22 -- # case "$var" in 00:32:33.001 12:54:52 -- accel/accel.sh@20 -- # IFS=: 00:32:33.001 12:54:52 -- accel/accel.sh@20 -- # read -r var val 00:32:34.378 12:54:53 -- accel/accel.sh@21 -- # val= 00:32:34.378 12:54:53 -- accel/accel.sh@22 -- # case "$var" in 00:32:34.378 12:54:53 -- accel/accel.sh@20 -- # IFS=: 00:32:34.378 12:54:53 -- accel/accel.sh@20 -- # read -r var val 00:32:34.378 12:54:53 -- accel/accel.sh@21 -- # val= 00:32:34.378 12:54:53 -- accel/accel.sh@22 -- # case "$var" in 00:32:34.378 12:54:53 -- accel/accel.sh@20 -- # IFS=: 00:32:34.378 12:54:53 -- accel/accel.sh@20 -- # read -r var val 00:32:34.378 12:54:53 -- accel/accel.sh@21 -- # val= 00:32:34.378 12:54:53 -- accel/accel.sh@22 -- # case "$var" in 00:32:34.378 12:54:53 -- accel/accel.sh@20 -- # IFS=: 00:32:34.378 12:54:53 -- accel/accel.sh@20 -- # read -r var val 00:32:34.378 12:54:53 -- accel/accel.sh@21 -- # val= 00:32:34.378 12:54:53 -- accel/accel.sh@22 -- # case "$var" in 00:32:34.378 12:54:53 -- accel/accel.sh@20 -- # IFS=: 00:32:34.378 12:54:53 -- accel/accel.sh@20 -- # read -r var val 00:32:34.378 12:54:53 -- accel/accel.sh@21 -- # val= 00:32:34.378 12:54:53 -- accel/accel.sh@22 -- # case "$var" in 00:32:34.378 12:54:53 -- accel/accel.sh@20 -- # IFS=: 00:32:34.378 12:54:53 -- accel/accel.sh@20 -- # read -r var val 00:32:34.378 12:54:53 -- accel/accel.sh@21 -- # val= 00:32:34.378 ************************************ 00:32:34.378 END TEST accel_xor 00:32:34.378 ************************************ 00:32:34.378 12:54:53 -- accel/accel.sh@22 -- # case "$var" in 00:32:34.378 12:54:53 -- accel/accel.sh@20 -- # IFS=: 00:32:34.378 12:54:53 -- accel/accel.sh@20 -- # read -r var val 00:32:34.378 12:54:53 -- accel/accel.sh@28 -- # [[ -n software ]] 00:32:34.378 12:54:53 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:32:34.378 12:54:53 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:34.378 00:32:34.378 real 0m2.929s 00:32:34.378 user 0m2.491s 00:32:34.378 sys 0m0.234s 00:32:34.378 12:54:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:34.378 12:54:53 -- common/autotest_common.sh@10 -- # set +x 00:32:34.378 12:54:53 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:32:34.378 12:54:53 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:32:34.378 12:54:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:34.378 12:54:53 -- common/autotest_common.sh@10 -- # set +x 00:32:34.378 ************************************ 00:32:34.378 START TEST accel_dif_verify 00:32:34.378 ************************************ 
00:32:34.378 12:54:53 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_verify 00:32:34.378 12:54:53 -- accel/accel.sh@16 -- # local accel_opc 00:32:34.378 12:54:53 -- accel/accel.sh@17 -- # local accel_module 00:32:34.378 12:54:53 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:32:34.378 12:54:53 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:32:34.378 12:54:53 -- accel/accel.sh@12 -- # build_accel_config 00:32:34.378 12:54:53 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:32:34.378 12:54:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:32:34.378 12:54:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:32:34.378 12:54:53 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:32:34.378 12:54:53 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:32:34.378 12:54:53 -- accel/accel.sh@41 -- # local IFS=, 00:32:34.378 12:54:53 -- accel/accel.sh@42 -- # jq -r . 00:32:34.378 [2024-07-22 12:54:53.487081] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:32:34.378 [2024-07-22 12:54:53.487201] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70685 ] 00:32:34.378 [2024-07-22 12:54:53.623245] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:34.378 [2024-07-22 12:54:53.697694] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:35.754 12:54:54 -- accel/accel.sh@18 -- # out=' 00:32:35.754 SPDK Configuration: 00:32:35.754 Core mask: 0x1 00:32:35.754 00:32:35.754 Accel Perf Configuration: 00:32:35.754 Workload Type: dif_verify 00:32:35.754 Vector size: 4096 bytes 00:32:35.754 Transfer size: 4096 bytes 00:32:35.754 Block size: 512 bytes 00:32:35.754 Metadata size: 8 bytes 00:32:35.754 Vector count 1 00:32:35.754 Module: software 00:32:35.754 Queue depth: 32 00:32:35.754 Allocate depth: 32 00:32:35.754 # threads/core: 1 00:32:35.754 Run time: 1 seconds 00:32:35.754 Verify: No 00:32:35.754 00:32:35.754 Running for 1 seconds... 00:32:35.754 00:32:35.754 Core,Thread Transfers Bandwidth Failed Miscompares 00:32:35.754 ------------------------------------------------------------------------------------ 00:32:35.754 0,0 109568/s 434 MiB/s 0 0 00:32:35.754 ==================================================================================== 00:32:35.754 Total 109568/s 428 MiB/s 0 0' 00:32:35.754 12:54:54 -- accel/accel.sh@20 -- # IFS=: 00:32:35.754 12:54:54 -- accel/accel.sh@20 -- # read -r var val 00:32:35.754 12:54:54 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:32:35.754 12:54:54 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:32:35.754 12:54:54 -- accel/accel.sh@12 -- # build_accel_config 00:32:35.754 12:54:54 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:32:35.754 12:54:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:32:35.754 12:54:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:32:35.754 12:54:54 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:32:35.754 12:54:54 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:32:35.754 12:54:54 -- accel/accel.sh@41 -- # local IFS=, 00:32:35.754 12:54:54 -- accel/accel.sh@42 -- # jq -r . 00:32:35.754 [2024-07-22 12:54:54.936463] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
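For dif_verify, the configuration block reports a 4096-byte transfer split into 512-byte blocks with 8 bytes of metadata per block, and Verify is set to No for this workload. The two bandwidth figures differ slightly: 109568/s x 4096 bytes is about 428 MiB/s and matches the Total row, while the per-core 434 MiB/s is consistent with counting 4096 + 8 x 8 = 4160 bytes per transfer, i.e. including the eight 8-byte DIF fields; that reading is an inference from the arithmetic, not something the log states. Stand-alone sketch, same assumption about dropping '-c /dev/fd/62':
# Hedged sketch: manual dif_verify run with the flags from the trace above.
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dif_verify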
00:32:35.754 [2024-07-22 12:54:54.936551] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70699 ] 00:32:35.754 [2024-07-22 12:54:55.071341] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:35.754 [2024-07-22 12:54:55.155620] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:36.071 12:54:55 -- accel/accel.sh@21 -- # val= 00:32:36.071 12:54:55 -- accel/accel.sh@22 -- # case "$var" in 00:32:36.071 12:54:55 -- accel/accel.sh@20 -- # IFS=: 00:32:36.071 12:54:55 -- accel/accel.sh@20 -- # read -r var val 00:32:36.071 12:54:55 -- accel/accel.sh@21 -- # val= 00:32:36.071 12:54:55 -- accel/accel.sh@22 -- # case "$var" in 00:32:36.071 12:54:55 -- accel/accel.sh@20 -- # IFS=: 00:32:36.071 12:54:55 -- accel/accel.sh@20 -- # read -r var val 00:32:36.071 12:54:55 -- accel/accel.sh@21 -- # val=0x1 00:32:36.071 12:54:55 -- accel/accel.sh@22 -- # case "$var" in 00:32:36.071 12:54:55 -- accel/accel.sh@20 -- # IFS=: 00:32:36.071 12:54:55 -- accel/accel.sh@20 -- # read -r var val 00:32:36.071 12:54:55 -- accel/accel.sh@21 -- # val= 00:32:36.071 12:54:55 -- accel/accel.sh@22 -- # case "$var" in 00:32:36.071 12:54:55 -- accel/accel.sh@20 -- # IFS=: 00:32:36.071 12:54:55 -- accel/accel.sh@20 -- # read -r var val 00:32:36.071 12:54:55 -- accel/accel.sh@21 -- # val= 00:32:36.071 12:54:55 -- accel/accel.sh@22 -- # case "$var" in 00:32:36.071 12:54:55 -- accel/accel.sh@20 -- # IFS=: 00:32:36.071 12:54:55 -- accel/accel.sh@20 -- # read -r var val 00:32:36.071 12:54:55 -- accel/accel.sh@21 -- # val=dif_verify 00:32:36.071 12:54:55 -- accel/accel.sh@22 -- # case "$var" in 00:32:36.071 12:54:55 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:32:36.071 12:54:55 -- accel/accel.sh@20 -- # IFS=: 00:32:36.071 12:54:55 -- accel/accel.sh@20 -- # read -r var val 00:32:36.071 12:54:55 -- accel/accel.sh@21 -- # val='4096 bytes' 00:32:36.071 12:54:55 -- accel/accel.sh@22 -- # case "$var" in 00:32:36.071 12:54:55 -- accel/accel.sh@20 -- # IFS=: 00:32:36.071 12:54:55 -- accel/accel.sh@20 -- # read -r var val 00:32:36.071 12:54:55 -- accel/accel.sh@21 -- # val='4096 bytes' 00:32:36.071 12:54:55 -- accel/accel.sh@22 -- # case "$var" in 00:32:36.071 12:54:55 -- accel/accel.sh@20 -- # IFS=: 00:32:36.071 12:54:55 -- accel/accel.sh@20 -- # read -r var val 00:32:36.071 12:54:55 -- accel/accel.sh@21 -- # val='512 bytes' 00:32:36.071 12:54:55 -- accel/accel.sh@22 -- # case "$var" in 00:32:36.071 12:54:55 -- accel/accel.sh@20 -- # IFS=: 00:32:36.071 12:54:55 -- accel/accel.sh@20 -- # read -r var val 00:32:36.071 12:54:55 -- accel/accel.sh@21 -- # val='8 bytes' 00:32:36.071 12:54:55 -- accel/accel.sh@22 -- # case "$var" in 00:32:36.071 12:54:55 -- accel/accel.sh@20 -- # IFS=: 00:32:36.071 12:54:55 -- accel/accel.sh@20 -- # read -r var val 00:32:36.071 12:54:55 -- accel/accel.sh@21 -- # val= 00:32:36.071 12:54:55 -- accel/accel.sh@22 -- # case "$var" in 00:32:36.071 12:54:55 -- accel/accel.sh@20 -- # IFS=: 00:32:36.071 12:54:55 -- accel/accel.sh@20 -- # read -r var val 00:32:36.071 12:54:55 -- accel/accel.sh@21 -- # val=software 00:32:36.071 12:54:55 -- accel/accel.sh@22 -- # case "$var" in 00:32:36.071 12:54:55 -- accel/accel.sh@23 -- # accel_module=software 00:32:36.072 12:54:55 -- accel/accel.sh@20 -- # IFS=: 00:32:36.072 12:54:55 -- accel/accel.sh@20 -- # read -r var val 00:32:36.072 12:54:55 -- accel/accel.sh@21 
-- # val=32 00:32:36.072 12:54:55 -- accel/accel.sh@22 -- # case "$var" in 00:32:36.072 12:54:55 -- accel/accel.sh@20 -- # IFS=: 00:32:36.072 12:54:55 -- accel/accel.sh@20 -- # read -r var val 00:32:36.072 12:54:55 -- accel/accel.sh@21 -- # val=32 00:32:36.072 12:54:55 -- accel/accel.sh@22 -- # case "$var" in 00:32:36.072 12:54:55 -- accel/accel.sh@20 -- # IFS=: 00:32:36.072 12:54:55 -- accel/accel.sh@20 -- # read -r var val 00:32:36.072 12:54:55 -- accel/accel.sh@21 -- # val=1 00:32:36.072 12:54:55 -- accel/accel.sh@22 -- # case "$var" in 00:32:36.072 12:54:55 -- accel/accel.sh@20 -- # IFS=: 00:32:36.072 12:54:55 -- accel/accel.sh@20 -- # read -r var val 00:32:36.072 12:54:55 -- accel/accel.sh@21 -- # val='1 seconds' 00:32:36.072 12:54:55 -- accel/accel.sh@22 -- # case "$var" in 00:32:36.072 12:54:55 -- accel/accel.sh@20 -- # IFS=: 00:32:36.072 12:54:55 -- accel/accel.sh@20 -- # read -r var val 00:32:36.072 12:54:55 -- accel/accel.sh@21 -- # val=No 00:32:36.072 12:54:55 -- accel/accel.sh@22 -- # case "$var" in 00:32:36.072 12:54:55 -- accel/accel.sh@20 -- # IFS=: 00:32:36.072 12:54:55 -- accel/accel.sh@20 -- # read -r var val 00:32:36.072 12:54:55 -- accel/accel.sh@21 -- # val= 00:32:36.072 12:54:55 -- accel/accel.sh@22 -- # case "$var" in 00:32:36.072 12:54:55 -- accel/accel.sh@20 -- # IFS=: 00:32:36.072 12:54:55 -- accel/accel.sh@20 -- # read -r var val 00:32:36.072 12:54:55 -- accel/accel.sh@21 -- # val= 00:32:36.072 12:54:55 -- accel/accel.sh@22 -- # case "$var" in 00:32:36.072 12:54:55 -- accel/accel.sh@20 -- # IFS=: 00:32:36.072 12:54:55 -- accel/accel.sh@20 -- # read -r var val 00:32:37.120 12:54:56 -- accel/accel.sh@21 -- # val= 00:32:37.120 12:54:56 -- accel/accel.sh@22 -- # case "$var" in 00:32:37.120 12:54:56 -- accel/accel.sh@20 -- # IFS=: 00:32:37.120 12:54:56 -- accel/accel.sh@20 -- # read -r var val 00:32:37.120 12:54:56 -- accel/accel.sh@21 -- # val= 00:32:37.120 12:54:56 -- accel/accel.sh@22 -- # case "$var" in 00:32:37.120 12:54:56 -- accel/accel.sh@20 -- # IFS=: 00:32:37.120 12:54:56 -- accel/accel.sh@20 -- # read -r var val 00:32:37.120 12:54:56 -- accel/accel.sh@21 -- # val= 00:32:37.120 12:54:56 -- accel/accel.sh@22 -- # case "$var" in 00:32:37.120 12:54:56 -- accel/accel.sh@20 -- # IFS=: 00:32:37.120 12:54:56 -- accel/accel.sh@20 -- # read -r var val 00:32:37.120 12:54:56 -- accel/accel.sh@21 -- # val= 00:32:37.120 12:54:56 -- accel/accel.sh@22 -- # case "$var" in 00:32:37.120 12:54:56 -- accel/accel.sh@20 -- # IFS=: 00:32:37.120 12:54:56 -- accel/accel.sh@20 -- # read -r var val 00:32:37.120 12:54:56 -- accel/accel.sh@21 -- # val= 00:32:37.120 12:54:56 -- accel/accel.sh@22 -- # case "$var" in 00:32:37.120 12:54:56 -- accel/accel.sh@20 -- # IFS=: 00:32:37.120 12:54:56 -- accel/accel.sh@20 -- # read -r var val 00:32:37.120 12:54:56 -- accel/accel.sh@21 -- # val= 00:32:37.120 ************************************ 00:32:37.120 END TEST accel_dif_verify 00:32:37.120 ************************************ 00:32:37.120 12:54:56 -- accel/accel.sh@22 -- # case "$var" in 00:32:37.120 12:54:56 -- accel/accel.sh@20 -- # IFS=: 00:32:37.120 12:54:56 -- accel/accel.sh@20 -- # read -r var val 00:32:37.120 12:54:56 -- accel/accel.sh@28 -- # [[ -n software ]] 00:32:37.120 12:54:56 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:32:37.120 12:54:56 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:37.120 00:32:37.120 real 0m2.917s 00:32:37.120 user 0m2.481s 00:32:37.120 sys 0m0.237s 00:32:37.120 12:54:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:37.120 
12:54:56 -- common/autotest_common.sh@10 -- # set +x 00:32:37.120 12:54:56 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:32:37.120 12:54:56 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:32:37.120 12:54:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:37.120 12:54:56 -- common/autotest_common.sh@10 -- # set +x 00:32:37.120 ************************************ 00:32:37.120 START TEST accel_dif_generate 00:32:37.120 ************************************ 00:32:37.120 12:54:56 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate 00:32:37.120 12:54:56 -- accel/accel.sh@16 -- # local accel_opc 00:32:37.120 12:54:56 -- accel/accel.sh@17 -- # local accel_module 00:32:37.120 12:54:56 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:32:37.120 12:54:56 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:32:37.120 12:54:56 -- accel/accel.sh@12 -- # build_accel_config 00:32:37.120 12:54:56 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:32:37.120 12:54:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:32:37.120 12:54:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:32:37.120 12:54:56 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:32:37.120 12:54:56 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:32:37.120 12:54:56 -- accel/accel.sh@41 -- # local IFS=, 00:32:37.120 12:54:56 -- accel/accel.sh@42 -- # jq -r . 00:32:37.120 [2024-07-22 12:54:56.448749] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:32:37.120 [2024-07-22 12:54:56.448840] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70739 ] 00:32:37.379 [2024-07-22 12:54:56.589551] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:37.379 [2024-07-22 12:54:56.685426] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:38.775 12:54:57 -- accel/accel.sh@18 -- # out=' 00:32:38.775 SPDK Configuration: 00:32:38.775 Core mask: 0x1 00:32:38.775 00:32:38.775 Accel Perf Configuration: 00:32:38.775 Workload Type: dif_generate 00:32:38.775 Vector size: 4096 bytes 00:32:38.775 Transfer size: 4096 bytes 00:32:38.775 Block size: 512 bytes 00:32:38.775 Metadata size: 8 bytes 00:32:38.775 Vector count 1 00:32:38.775 Module: software 00:32:38.775 Queue depth: 32 00:32:38.775 Allocate depth: 32 00:32:38.775 # threads/core: 1 00:32:38.775 Run time: 1 seconds 00:32:38.775 Verify: No 00:32:38.775 00:32:38.775 Running for 1 seconds... 
00:32:38.775 00:32:38.775 Core,Thread Transfers Bandwidth Failed Miscompares 00:32:38.775 ------------------------------------------------------------------------------------ 00:32:38.775 0,0 119040/s 472 MiB/s 0 0 00:32:38.775 ==================================================================================== 00:32:38.775 Total 119040/s 465 MiB/s 0 0' 00:32:38.775 12:54:57 -- accel/accel.sh@20 -- # IFS=: 00:32:38.775 12:54:57 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:32:38.775 12:54:57 -- accel/accel.sh@20 -- # read -r var val 00:32:38.775 12:54:57 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:32:38.775 12:54:57 -- accel/accel.sh@12 -- # build_accel_config 00:32:38.775 12:54:57 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:32:38.775 12:54:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:32:38.775 12:54:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:32:38.775 12:54:57 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:32:38.775 12:54:57 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:32:38.775 12:54:57 -- accel/accel.sh@41 -- # local IFS=, 00:32:38.775 12:54:57 -- accel/accel.sh@42 -- # jq -r . 00:32:38.775 [2024-07-22 12:54:57.939391] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:32:38.775 [2024-07-22 12:54:57.939518] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70753 ] 00:32:38.776 [2024-07-22 12:54:58.075933] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:38.776 [2024-07-22 12:54:58.172264] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:39.035 12:54:58 -- accel/accel.sh@21 -- # val= 00:32:39.035 12:54:58 -- accel/accel.sh@22 -- # case "$var" in 00:32:39.035 12:54:58 -- accel/accel.sh@20 -- # IFS=: 00:32:39.035 12:54:58 -- accel/accel.sh@20 -- # read -r var val 00:32:39.035 12:54:58 -- accel/accel.sh@21 -- # val= 00:32:39.035 12:54:58 -- accel/accel.sh@22 -- # case "$var" in 00:32:39.035 12:54:58 -- accel/accel.sh@20 -- # IFS=: 00:32:39.035 12:54:58 -- accel/accel.sh@20 -- # read -r var val 00:32:39.035 12:54:58 -- accel/accel.sh@21 -- # val=0x1 00:32:39.035 12:54:58 -- accel/accel.sh@22 -- # case "$var" in 00:32:39.035 12:54:58 -- accel/accel.sh@20 -- # IFS=: 00:32:39.035 12:54:58 -- accel/accel.sh@20 -- # read -r var val 00:32:39.035 12:54:58 -- accel/accel.sh@21 -- # val= 00:32:39.035 12:54:58 -- accel/accel.sh@22 -- # case "$var" in 00:32:39.035 12:54:58 -- accel/accel.sh@20 -- # IFS=: 00:32:39.035 12:54:58 -- accel/accel.sh@20 -- # read -r var val 00:32:39.035 12:54:58 -- accel/accel.sh@21 -- # val= 00:32:39.035 12:54:58 -- accel/accel.sh@22 -- # case "$var" in 00:32:39.035 12:54:58 -- accel/accel.sh@20 -- # IFS=: 00:32:39.035 12:54:58 -- accel/accel.sh@20 -- # read -r var val 00:32:39.035 12:54:58 -- accel/accel.sh@21 -- # val=dif_generate 00:32:39.035 12:54:58 -- accel/accel.sh@22 -- # case "$var" in 00:32:39.035 12:54:58 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:32:39.035 12:54:58 -- accel/accel.sh@20 -- # IFS=: 00:32:39.035 12:54:58 -- accel/accel.sh@20 -- # read -r var val 00:32:39.035 12:54:58 -- accel/accel.sh@21 -- # val='4096 bytes' 00:32:39.035 12:54:58 -- accel/accel.sh@22 -- # case "$var" in 00:32:39.035 12:54:58 -- accel/accel.sh@20 -- # IFS=: 00:32:39.035 12:54:58 -- accel/accel.sh@20 -- # read -r var val 
00:32:39.035 12:54:58 -- accel/accel.sh@21 -- # val='4096 bytes' 00:32:39.035 12:54:58 -- accel/accel.sh@22 -- # case "$var" in 00:32:39.035 12:54:58 -- accel/accel.sh@20 -- # IFS=: 00:32:39.035 12:54:58 -- accel/accel.sh@20 -- # read -r var val 00:32:39.035 12:54:58 -- accel/accel.sh@21 -- # val='512 bytes' 00:32:39.035 12:54:58 -- accel/accel.sh@22 -- # case "$var" in 00:32:39.035 12:54:58 -- accel/accel.sh@20 -- # IFS=: 00:32:39.035 12:54:58 -- accel/accel.sh@20 -- # read -r var val 00:32:39.035 12:54:58 -- accel/accel.sh@21 -- # val='8 bytes' 00:32:39.035 12:54:58 -- accel/accel.sh@22 -- # case "$var" in 00:32:39.035 12:54:58 -- accel/accel.sh@20 -- # IFS=: 00:32:39.035 12:54:58 -- accel/accel.sh@20 -- # read -r var val 00:32:39.035 12:54:58 -- accel/accel.sh@21 -- # val= 00:32:39.035 12:54:58 -- accel/accel.sh@22 -- # case "$var" in 00:32:39.035 12:54:58 -- accel/accel.sh@20 -- # IFS=: 00:32:39.035 12:54:58 -- accel/accel.sh@20 -- # read -r var val 00:32:39.035 12:54:58 -- accel/accel.sh@21 -- # val=software 00:32:39.035 12:54:58 -- accel/accel.sh@22 -- # case "$var" in 00:32:39.035 12:54:58 -- accel/accel.sh@23 -- # accel_module=software 00:32:39.035 12:54:58 -- accel/accel.sh@20 -- # IFS=: 00:32:39.035 12:54:58 -- accel/accel.sh@20 -- # read -r var val 00:32:39.035 12:54:58 -- accel/accel.sh@21 -- # val=32 00:32:39.035 12:54:58 -- accel/accel.sh@22 -- # case "$var" in 00:32:39.035 12:54:58 -- accel/accel.sh@20 -- # IFS=: 00:32:39.035 12:54:58 -- accel/accel.sh@20 -- # read -r var val 00:32:39.035 12:54:58 -- accel/accel.sh@21 -- # val=32 00:32:39.035 12:54:58 -- accel/accel.sh@22 -- # case "$var" in 00:32:39.035 12:54:58 -- accel/accel.sh@20 -- # IFS=: 00:32:39.035 12:54:58 -- accel/accel.sh@20 -- # read -r var val 00:32:39.035 12:54:58 -- accel/accel.sh@21 -- # val=1 00:32:39.035 12:54:58 -- accel/accel.sh@22 -- # case "$var" in 00:32:39.035 12:54:58 -- accel/accel.sh@20 -- # IFS=: 00:32:39.035 12:54:58 -- accel/accel.sh@20 -- # read -r var val 00:32:39.035 12:54:58 -- accel/accel.sh@21 -- # val='1 seconds' 00:32:39.035 12:54:58 -- accel/accel.sh@22 -- # case "$var" in 00:32:39.035 12:54:58 -- accel/accel.sh@20 -- # IFS=: 00:32:39.035 12:54:58 -- accel/accel.sh@20 -- # read -r var val 00:32:39.035 12:54:58 -- accel/accel.sh@21 -- # val=No 00:32:39.035 12:54:58 -- accel/accel.sh@22 -- # case "$var" in 00:32:39.035 12:54:58 -- accel/accel.sh@20 -- # IFS=: 00:32:39.035 12:54:58 -- accel/accel.sh@20 -- # read -r var val 00:32:39.035 12:54:58 -- accel/accel.sh@21 -- # val= 00:32:39.035 12:54:58 -- accel/accel.sh@22 -- # case "$var" in 00:32:39.035 12:54:58 -- accel/accel.sh@20 -- # IFS=: 00:32:39.035 12:54:58 -- accel/accel.sh@20 -- # read -r var val 00:32:39.035 12:54:58 -- accel/accel.sh@21 -- # val= 00:32:39.035 12:54:58 -- accel/accel.sh@22 -- # case "$var" in 00:32:39.035 12:54:58 -- accel/accel.sh@20 -- # IFS=: 00:32:39.035 12:54:58 -- accel/accel.sh@20 -- # read -r var val 00:32:39.972 12:54:59 -- accel/accel.sh@21 -- # val= 00:32:39.972 12:54:59 -- accel/accel.sh@22 -- # case "$var" in 00:32:39.972 12:54:59 -- accel/accel.sh@20 -- # IFS=: 00:32:39.972 12:54:59 -- accel/accel.sh@20 -- # read -r var val 00:32:39.972 12:54:59 -- accel/accel.sh@21 -- # val= 00:32:39.972 12:54:59 -- accel/accel.sh@22 -- # case "$var" in 00:32:39.972 12:54:59 -- accel/accel.sh@20 -- # IFS=: 00:32:39.972 12:54:59 -- accel/accel.sh@20 -- # read -r var val 00:32:39.972 12:54:59 -- accel/accel.sh@21 -- # val= 00:32:39.972 12:54:59 -- accel/accel.sh@22 -- # case "$var" in 00:32:39.972 12:54:59 -- 
accel/accel.sh@20 -- # IFS=: 00:32:39.972 12:54:59 -- accel/accel.sh@20 -- # read -r var val 00:32:39.972 12:54:59 -- accel/accel.sh@21 -- # val= 00:32:39.972 12:54:59 -- accel/accel.sh@22 -- # case "$var" in 00:32:39.972 12:54:59 -- accel/accel.sh@20 -- # IFS=: 00:32:39.972 12:54:59 -- accel/accel.sh@20 -- # read -r var val 00:32:39.972 12:54:59 -- accel/accel.sh@21 -- # val= 00:32:40.232 12:54:59 -- accel/accel.sh@22 -- # case "$var" in 00:32:40.232 12:54:59 -- accel/accel.sh@20 -- # IFS=: 00:32:40.232 12:54:59 -- accel/accel.sh@20 -- # read -r var val 00:32:40.232 12:54:59 -- accel/accel.sh@21 -- # val= 00:32:40.232 12:54:59 -- accel/accel.sh@22 -- # case "$var" in 00:32:40.232 12:54:59 -- accel/accel.sh@20 -- # IFS=: 00:32:40.232 12:54:59 -- accel/accel.sh@20 -- # read -r var val 00:32:40.232 12:54:59 -- accel/accel.sh@28 -- # [[ -n software ]] 00:32:40.232 12:54:59 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:32:40.232 12:54:59 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:40.232 00:32:40.232 real 0m2.970s 00:32:40.232 user 0m2.515s 00:32:40.232 sys 0m0.255s 00:32:40.232 12:54:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:40.232 ************************************ 00:32:40.232 END TEST accel_dif_generate 00:32:40.232 ************************************ 00:32:40.232 12:54:59 -- common/autotest_common.sh@10 -- # set +x 00:32:40.232 12:54:59 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:32:40.232 12:54:59 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:32:40.232 12:54:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:40.232 12:54:59 -- common/autotest_common.sh@10 -- # set +x 00:32:40.232 ************************************ 00:32:40.232 START TEST accel_dif_generate_copy 00:32:40.232 ************************************ 00:32:40.232 12:54:59 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate_copy 00:32:40.232 12:54:59 -- accel/accel.sh@16 -- # local accel_opc 00:32:40.232 12:54:59 -- accel/accel.sh@17 -- # local accel_module 00:32:40.232 12:54:59 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:32:40.232 12:54:59 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:32:40.232 12:54:59 -- accel/accel.sh@12 -- # build_accel_config 00:32:40.232 12:54:59 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:32:40.232 12:54:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:32:40.232 12:54:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:32:40.232 12:54:59 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:32:40.232 12:54:59 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:32:40.232 12:54:59 -- accel/accel.sh@41 -- # local IFS=, 00:32:40.232 12:54:59 -- accel/accel.sh@42 -- # jq -r . 00:32:40.232 [2024-07-22 12:54:59.467286] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:32:40.232 [2024-07-22 12:54:59.467399] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70788 ] 00:32:40.232 [2024-07-22 12:54:59.600613] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:40.493 [2024-07-22 12:54:59.697725] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:41.872 12:55:00 -- accel/accel.sh@18 -- # out=' 00:32:41.872 SPDK Configuration: 00:32:41.872 Core mask: 0x1 00:32:41.872 00:32:41.872 Accel Perf Configuration: 00:32:41.872 Workload Type: dif_generate_copy 00:32:41.872 Vector size: 4096 bytes 00:32:41.872 Transfer size: 4096 bytes 00:32:41.872 Vector count 1 00:32:41.872 Module: software 00:32:41.872 Queue depth: 32 00:32:41.872 Allocate depth: 32 00:32:41.872 # threads/core: 1 00:32:41.872 Run time: 1 seconds 00:32:41.872 Verify: No 00:32:41.872 00:32:41.872 Running for 1 seconds... 00:32:41.872 00:32:41.872 Core,Thread Transfers Bandwidth Failed Miscompares 00:32:41.872 ------------------------------------------------------------------------------------ 00:32:41.872 0,0 92992/s 368 MiB/s 0 0 00:32:41.872 ==================================================================================== 00:32:41.872 Total 92992/s 363 MiB/s 0 0' 00:32:41.872 12:55:00 -- accel/accel.sh@20 -- # IFS=: 00:32:41.872 12:55:00 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:32:41.872 12:55:00 -- accel/accel.sh@20 -- # read -r var val 00:32:41.872 12:55:00 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:32:41.872 12:55:00 -- accel/accel.sh@12 -- # build_accel_config 00:32:41.872 12:55:00 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:32:41.872 12:55:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:32:41.872 12:55:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:32:41.872 12:55:00 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:32:41.872 12:55:00 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:32:41.872 12:55:00 -- accel/accel.sh@41 -- # local IFS=, 00:32:41.872 12:55:00 -- accel/accel.sh@42 -- # jq -r . 00:32:41.872 [2024-07-22 12:55:00.954528] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
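In the accel_perf command traced at accel.sh@12 above, '-c /dev/fd/62' means the JSON assembled by build_accel_config is handed to the binary over file descriptor 62 instead of a file on disk; the '[[ 0 -gt 0 ]]' and "[[ -n '' ]]" checks at accel.sh@33-37 suggest that in this job nothing is added to that config, so the software module defaults are used. A generic sketch of the fd-passing pattern only, with placeholder JSON and cat standing in for the consumer (not the real config or binary):
gen_cfg() { printf '{"subsystems":[]}\n'; }  # placeholder JSON, not the real accel config
exec 62< <(gen_cfg)                          # expose the generated config as /dev/fd/62
cat /dev/fd/62                               # the consumer reads the config back via the fd path
exec 62<&-                                   # close the descriptor when done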
00:32:41.872 [2024-07-22 12:55:00.954658] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70807 ] 00:32:41.872 [2024-07-22 12:55:01.103990] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:41.872 [2024-07-22 12:55:01.196677] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:41.872 12:55:01 -- accel/accel.sh@21 -- # val= 00:32:41.872 12:55:01 -- accel/accel.sh@22 -- # case "$var" in 00:32:41.872 12:55:01 -- accel/accel.sh@20 -- # IFS=: 00:32:41.872 12:55:01 -- accel/accel.sh@20 -- # read -r var val 00:32:41.872 12:55:01 -- accel/accel.sh@21 -- # val= 00:32:41.872 12:55:01 -- accel/accel.sh@22 -- # case "$var" in 00:32:41.872 12:55:01 -- accel/accel.sh@20 -- # IFS=: 00:32:41.872 12:55:01 -- accel/accel.sh@20 -- # read -r var val 00:32:41.872 12:55:01 -- accel/accel.sh@21 -- # val=0x1 00:32:41.872 12:55:01 -- accel/accel.sh@22 -- # case "$var" in 00:32:41.872 12:55:01 -- accel/accel.sh@20 -- # IFS=: 00:32:41.872 12:55:01 -- accel/accel.sh@20 -- # read -r var val 00:32:41.872 12:55:01 -- accel/accel.sh@21 -- # val= 00:32:41.872 12:55:01 -- accel/accel.sh@22 -- # case "$var" in 00:32:41.872 12:55:01 -- accel/accel.sh@20 -- # IFS=: 00:32:41.872 12:55:01 -- accel/accel.sh@20 -- # read -r var val 00:32:41.872 12:55:01 -- accel/accel.sh@21 -- # val= 00:32:41.872 12:55:01 -- accel/accel.sh@22 -- # case "$var" in 00:32:41.872 12:55:01 -- accel/accel.sh@20 -- # IFS=: 00:32:41.872 12:55:01 -- accel/accel.sh@20 -- # read -r var val 00:32:41.872 12:55:01 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:32:41.872 12:55:01 -- accel/accel.sh@22 -- # case "$var" in 00:32:41.872 12:55:01 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:32:41.872 12:55:01 -- accel/accel.sh@20 -- # IFS=: 00:32:41.872 12:55:01 -- accel/accel.sh@20 -- # read -r var val 00:32:41.872 12:55:01 -- accel/accel.sh@21 -- # val='4096 bytes' 00:32:41.872 12:55:01 -- accel/accel.sh@22 -- # case "$var" in 00:32:41.872 12:55:01 -- accel/accel.sh@20 -- # IFS=: 00:32:41.872 12:55:01 -- accel/accel.sh@20 -- # read -r var val 00:32:41.872 12:55:01 -- accel/accel.sh@21 -- # val='4096 bytes' 00:32:41.872 12:55:01 -- accel/accel.sh@22 -- # case "$var" in 00:32:41.872 12:55:01 -- accel/accel.sh@20 -- # IFS=: 00:32:41.872 12:55:01 -- accel/accel.sh@20 -- # read -r var val 00:32:41.872 12:55:01 -- accel/accel.sh@21 -- # val= 00:32:41.872 12:55:01 -- accel/accel.sh@22 -- # case "$var" in 00:32:41.872 12:55:01 -- accel/accel.sh@20 -- # IFS=: 00:32:41.872 12:55:01 -- accel/accel.sh@20 -- # read -r var val 00:32:41.872 12:55:01 -- accel/accel.sh@21 -- # val=software 00:32:41.872 12:55:01 -- accel/accel.sh@22 -- # case "$var" in 00:32:41.872 12:55:01 -- accel/accel.sh@23 -- # accel_module=software 00:32:41.872 12:55:01 -- accel/accel.sh@20 -- # IFS=: 00:32:41.872 12:55:01 -- accel/accel.sh@20 -- # read -r var val 00:32:41.872 12:55:01 -- accel/accel.sh@21 -- # val=32 00:32:41.872 12:55:01 -- accel/accel.sh@22 -- # case "$var" in 00:32:41.872 12:55:01 -- accel/accel.sh@20 -- # IFS=: 00:32:41.872 12:55:01 -- accel/accel.sh@20 -- # read -r var val 00:32:41.872 12:55:01 -- accel/accel.sh@21 -- # val=32 00:32:41.872 12:55:01 -- accel/accel.sh@22 -- # case "$var" in 00:32:41.872 12:55:01 -- accel/accel.sh@20 -- # IFS=: 00:32:41.872 12:55:01 -- accel/accel.sh@20 -- # read -r var val 00:32:41.872 12:55:01 -- accel/accel.sh@21 
-- # val=1 00:32:41.872 12:55:01 -- accel/accel.sh@22 -- # case "$var" in 00:32:41.872 12:55:01 -- accel/accel.sh@20 -- # IFS=: 00:32:41.872 12:55:01 -- accel/accel.sh@20 -- # read -r var val 00:32:41.872 12:55:01 -- accel/accel.sh@21 -- # val='1 seconds' 00:32:41.872 12:55:01 -- accel/accel.sh@22 -- # case "$var" in 00:32:41.872 12:55:01 -- accel/accel.sh@20 -- # IFS=: 00:32:41.872 12:55:01 -- accel/accel.sh@20 -- # read -r var val 00:32:41.872 12:55:01 -- accel/accel.sh@21 -- # val=No 00:32:41.872 12:55:01 -- accel/accel.sh@22 -- # case "$var" in 00:32:41.872 12:55:01 -- accel/accel.sh@20 -- # IFS=: 00:32:41.872 12:55:01 -- accel/accel.sh@20 -- # read -r var val 00:32:41.872 12:55:01 -- accel/accel.sh@21 -- # val= 00:32:41.872 12:55:01 -- accel/accel.sh@22 -- # case "$var" in 00:32:41.872 12:55:01 -- accel/accel.sh@20 -- # IFS=: 00:32:41.872 12:55:01 -- accel/accel.sh@20 -- # read -r var val 00:32:41.872 12:55:01 -- accel/accel.sh@21 -- # val= 00:32:41.872 12:55:01 -- accel/accel.sh@22 -- # case "$var" in 00:32:41.872 12:55:01 -- accel/accel.sh@20 -- # IFS=: 00:32:41.872 12:55:01 -- accel/accel.sh@20 -- # read -r var val 00:32:43.250 12:55:02 -- accel/accel.sh@21 -- # val= 00:32:43.250 12:55:02 -- accel/accel.sh@22 -- # case "$var" in 00:32:43.250 12:55:02 -- accel/accel.sh@20 -- # IFS=: 00:32:43.250 12:55:02 -- accel/accel.sh@20 -- # read -r var val 00:32:43.250 12:55:02 -- accel/accel.sh@21 -- # val= 00:32:43.250 12:55:02 -- accel/accel.sh@22 -- # case "$var" in 00:32:43.250 12:55:02 -- accel/accel.sh@20 -- # IFS=: 00:32:43.250 12:55:02 -- accel/accel.sh@20 -- # read -r var val 00:32:43.250 12:55:02 -- accel/accel.sh@21 -- # val= 00:32:43.250 12:55:02 -- accel/accel.sh@22 -- # case "$var" in 00:32:43.250 12:55:02 -- accel/accel.sh@20 -- # IFS=: 00:32:43.250 12:55:02 -- accel/accel.sh@20 -- # read -r var val 00:32:43.250 12:55:02 -- accel/accel.sh@21 -- # val= 00:32:43.250 12:55:02 -- accel/accel.sh@22 -- # case "$var" in 00:32:43.250 12:55:02 -- accel/accel.sh@20 -- # IFS=: 00:32:43.250 12:55:02 -- accel/accel.sh@20 -- # read -r var val 00:32:43.250 12:55:02 -- accel/accel.sh@21 -- # val= 00:32:43.250 12:55:02 -- accel/accel.sh@22 -- # case "$var" in 00:32:43.250 12:55:02 -- accel/accel.sh@20 -- # IFS=: 00:32:43.250 12:55:02 -- accel/accel.sh@20 -- # read -r var val 00:32:43.250 12:55:02 -- accel/accel.sh@21 -- # val= 00:32:43.250 12:55:02 -- accel/accel.sh@22 -- # case "$var" in 00:32:43.250 12:55:02 -- accel/accel.sh@20 -- # IFS=: 00:32:43.250 12:55:02 -- accel/accel.sh@20 -- # read -r var val 00:32:43.250 12:55:02 -- accel/accel.sh@28 -- # [[ -n software ]] 00:32:43.250 12:55:02 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:32:43.250 12:55:02 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:43.250 00:32:43.250 real 0m2.984s 00:32:43.250 user 0m2.523s 00:32:43.250 sys 0m0.257s 00:32:43.250 12:55:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:43.250 12:55:02 -- common/autotest_common.sh@10 -- # set +x 00:32:43.250 ************************************ 00:32:43.250 END TEST accel_dif_generate_copy 00:32:43.250 ************************************ 00:32:43.250 12:55:02 -- accel/accel.sh@107 -- # [[ y == y ]] 00:32:43.250 12:55:02 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:32:43.250 12:55:02 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:32:43.250 12:55:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:43.250 12:55:02 -- 
common/autotest_common.sh@10 -- # set +x 00:32:43.250 ************************************ 00:32:43.250 START TEST accel_comp 00:32:43.250 ************************************ 00:32:43.250 12:55:02 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:32:43.250 12:55:02 -- accel/accel.sh@16 -- # local accel_opc 00:32:43.250 12:55:02 -- accel/accel.sh@17 -- # local accel_module 00:32:43.250 12:55:02 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:32:43.250 12:55:02 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:32:43.250 12:55:02 -- accel/accel.sh@12 -- # build_accel_config 00:32:43.250 12:55:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:32:43.250 12:55:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:32:43.250 12:55:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:32:43.250 12:55:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:32:43.250 12:55:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:32:43.250 12:55:02 -- accel/accel.sh@41 -- # local IFS=, 00:32:43.250 12:55:02 -- accel/accel.sh@42 -- # jq -r . 00:32:43.250 [2024-07-22 12:55:02.501671] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:32:43.251 [2024-07-22 12:55:02.501770] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70842 ] 00:32:43.251 [2024-07-22 12:55:02.635916] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:43.510 [2024-07-22 12:55:02.733409] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:44.886 12:55:03 -- accel/accel.sh@18 -- # out='Preparing input file... 00:32:44.886 00:32:44.886 SPDK Configuration: 00:32:44.886 Core mask: 0x1 00:32:44.886 00:32:44.886 Accel Perf Configuration: 00:32:44.886 Workload Type: compress 00:32:44.886 Transfer size: 4096 bytes 00:32:44.886 Vector count 1 00:32:44.886 Module: software 00:32:44.886 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:32:44.886 Queue depth: 32 00:32:44.886 Allocate depth: 32 00:32:44.886 # threads/core: 1 00:32:44.886 Run time: 1 seconds 00:32:44.886 Verify: No 00:32:44.886 00:32:44.886 Running for 1 seconds... 
00:32:44.886 00:32:44.886 Core,Thread Transfers Bandwidth Failed Miscompares 00:32:44.886 ------------------------------------------------------------------------------------ 00:32:44.886 0,0 47744/s 199 MiB/s 0 0 00:32:44.886 ==================================================================================== 00:32:44.886 Total 47744/s 186 MiB/s 0 0' 00:32:44.886 12:55:03 -- accel/accel.sh@20 -- # IFS=: 00:32:44.886 12:55:03 -- accel/accel.sh@20 -- # read -r var val 00:32:44.886 12:55:03 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:32:44.886 12:55:03 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:32:44.886 12:55:03 -- accel/accel.sh@12 -- # build_accel_config 00:32:44.886 12:55:03 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:32:44.886 12:55:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:32:44.886 12:55:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:32:44.886 12:55:03 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:32:44.886 12:55:03 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:32:44.886 12:55:03 -- accel/accel.sh@41 -- # local IFS=, 00:32:44.886 12:55:03 -- accel/accel.sh@42 -- # jq -r . 00:32:44.886 [2024-07-22 12:55:03.987464] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:32:44.886 [2024-07-22 12:55:03.987582] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70861 ] 00:32:44.886 [2024-07-22 12:55:04.124105] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:44.886 [2024-07-22 12:55:04.224023] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:44.886 12:55:04 -- accel/accel.sh@21 -- # val= 00:32:44.886 12:55:04 -- accel/accel.sh@22 -- # case "$var" in 00:32:44.886 12:55:04 -- accel/accel.sh@20 -- # IFS=: 00:32:44.886 12:55:04 -- accel/accel.sh@20 -- # read -r var val 00:32:44.886 12:55:04 -- accel/accel.sh@21 -- # val= 00:32:44.886 12:55:04 -- accel/accel.sh@22 -- # case "$var" in 00:32:44.886 12:55:04 -- accel/accel.sh@20 -- # IFS=: 00:32:44.886 12:55:04 -- accel/accel.sh@20 -- # read -r var val 00:32:44.886 12:55:04 -- accel/accel.sh@21 -- # val= 00:32:44.886 12:55:04 -- accel/accel.sh@22 -- # case "$var" in 00:32:44.886 12:55:04 -- accel/accel.sh@20 -- # IFS=: 00:32:44.886 12:55:04 -- accel/accel.sh@20 -- # read -r var val 00:32:44.886 12:55:04 -- accel/accel.sh@21 -- # val=0x1 00:32:44.886 12:55:04 -- accel/accel.sh@22 -- # case "$var" in 00:32:44.886 12:55:04 -- accel/accel.sh@20 -- # IFS=: 00:32:44.886 12:55:04 -- accel/accel.sh@20 -- # read -r var val 00:32:44.886 12:55:04 -- accel/accel.sh@21 -- # val= 00:32:44.886 12:55:04 -- accel/accel.sh@22 -- # case "$var" in 00:32:44.886 12:55:04 -- accel/accel.sh@20 -- # IFS=: 00:32:44.886 12:55:04 -- accel/accel.sh@20 -- # read -r var val 00:32:44.886 12:55:04 -- accel/accel.sh@21 -- # val= 00:32:44.886 12:55:04 -- accel/accel.sh@22 -- # case "$var" in 00:32:44.886 12:55:04 -- accel/accel.sh@20 -- # IFS=: 00:32:44.886 12:55:04 -- accel/accel.sh@20 -- # read -r var val 00:32:44.886 12:55:04 -- accel/accel.sh@21 -- # val=compress 00:32:44.886 12:55:04 -- accel/accel.sh@22 -- # case "$var" in 00:32:44.886 12:55:04 -- accel/accel.sh@24 -- # accel_opc=compress 00:32:44.886 12:55:04 -- accel/accel.sh@20 -- # IFS=: 
00:32:44.886 12:55:04 -- accel/accel.sh@20 -- # read -r var val 00:32:44.886 12:55:04 -- accel/accel.sh@21 -- # val='4096 bytes' 00:32:44.886 12:55:04 -- accel/accel.sh@22 -- # case "$var" in 00:32:44.886 12:55:04 -- accel/accel.sh@20 -- # IFS=: 00:32:44.886 12:55:04 -- accel/accel.sh@20 -- # read -r var val 00:32:44.886 12:55:04 -- accel/accel.sh@21 -- # val= 00:32:44.886 12:55:04 -- accel/accel.sh@22 -- # case "$var" in 00:32:44.886 12:55:04 -- accel/accel.sh@20 -- # IFS=: 00:32:44.886 12:55:04 -- accel/accel.sh@20 -- # read -r var val 00:32:44.886 12:55:04 -- accel/accel.sh@21 -- # val=software 00:32:44.886 12:55:04 -- accel/accel.sh@22 -- # case "$var" in 00:32:44.886 12:55:04 -- accel/accel.sh@23 -- # accel_module=software 00:32:44.886 12:55:04 -- accel/accel.sh@20 -- # IFS=: 00:32:44.886 12:55:04 -- accel/accel.sh@20 -- # read -r var val 00:32:44.886 12:55:04 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:32:44.886 12:55:04 -- accel/accel.sh@22 -- # case "$var" in 00:32:44.886 12:55:04 -- accel/accel.sh@20 -- # IFS=: 00:32:44.886 12:55:04 -- accel/accel.sh@20 -- # read -r var val 00:32:44.886 12:55:04 -- accel/accel.sh@21 -- # val=32 00:32:44.886 12:55:04 -- accel/accel.sh@22 -- # case "$var" in 00:32:44.886 12:55:04 -- accel/accel.sh@20 -- # IFS=: 00:32:44.886 12:55:04 -- accel/accel.sh@20 -- # read -r var val 00:32:44.886 12:55:04 -- accel/accel.sh@21 -- # val=32 00:32:44.886 12:55:04 -- accel/accel.sh@22 -- # case "$var" in 00:32:44.886 12:55:04 -- accel/accel.sh@20 -- # IFS=: 00:32:44.886 12:55:04 -- accel/accel.sh@20 -- # read -r var val 00:32:44.886 12:55:04 -- accel/accel.sh@21 -- # val=1 00:32:44.886 12:55:04 -- accel/accel.sh@22 -- # case "$var" in 00:32:44.886 12:55:04 -- accel/accel.sh@20 -- # IFS=: 00:32:44.886 12:55:04 -- accel/accel.sh@20 -- # read -r var val 00:32:44.886 12:55:04 -- accel/accel.sh@21 -- # val='1 seconds' 00:32:44.886 12:55:04 -- accel/accel.sh@22 -- # case "$var" in 00:32:44.886 12:55:04 -- accel/accel.sh@20 -- # IFS=: 00:32:44.886 12:55:04 -- accel/accel.sh@20 -- # read -r var val 00:32:44.886 12:55:04 -- accel/accel.sh@21 -- # val=No 00:32:44.886 12:55:04 -- accel/accel.sh@22 -- # case "$var" in 00:32:44.886 12:55:04 -- accel/accel.sh@20 -- # IFS=: 00:32:44.886 12:55:04 -- accel/accel.sh@20 -- # read -r var val 00:32:44.886 12:55:04 -- accel/accel.sh@21 -- # val= 00:32:44.886 12:55:04 -- accel/accel.sh@22 -- # case "$var" in 00:32:44.886 12:55:04 -- accel/accel.sh@20 -- # IFS=: 00:32:44.886 12:55:04 -- accel/accel.sh@20 -- # read -r var val 00:32:44.886 12:55:04 -- accel/accel.sh@21 -- # val= 00:32:44.886 12:55:04 -- accel/accel.sh@22 -- # case "$var" in 00:32:44.886 12:55:04 -- accel/accel.sh@20 -- # IFS=: 00:32:44.886 12:55:04 -- accel/accel.sh@20 -- # read -r var val 00:32:46.260 12:55:05 -- accel/accel.sh@21 -- # val= 00:32:46.260 12:55:05 -- accel/accel.sh@22 -- # case "$var" in 00:32:46.260 12:55:05 -- accel/accel.sh@20 -- # IFS=: 00:32:46.260 12:55:05 -- accel/accel.sh@20 -- # read -r var val 00:32:46.260 12:55:05 -- accel/accel.sh@21 -- # val= 00:32:46.260 12:55:05 -- accel/accel.sh@22 -- # case "$var" in 00:32:46.260 12:55:05 -- accel/accel.sh@20 -- # IFS=: 00:32:46.260 12:55:05 -- accel/accel.sh@20 -- # read -r var val 00:32:46.260 12:55:05 -- accel/accel.sh@21 -- # val= 00:32:46.260 12:55:05 -- accel/accel.sh@22 -- # case "$var" in 00:32:46.260 12:55:05 -- accel/accel.sh@20 -- # IFS=: 00:32:46.260 12:55:05 -- accel/accel.sh@20 -- # read -r var val 00:32:46.260 12:55:05 -- accel/accel.sh@21 -- # val= 
00:32:46.260 12:55:05 -- accel/accel.sh@22 -- # case "$var" in 00:32:46.260 12:55:05 -- accel/accel.sh@20 -- # IFS=: 00:32:46.260 12:55:05 -- accel/accel.sh@20 -- # read -r var val 00:32:46.260 12:55:05 -- accel/accel.sh@21 -- # val= 00:32:46.260 12:55:05 -- accel/accel.sh@22 -- # case "$var" in 00:32:46.260 12:55:05 -- accel/accel.sh@20 -- # IFS=: 00:32:46.260 12:55:05 -- accel/accel.sh@20 -- # read -r var val 00:32:46.260 12:55:05 -- accel/accel.sh@21 -- # val= 00:32:46.260 12:55:05 -- accel/accel.sh@22 -- # case "$var" in 00:32:46.260 12:55:05 -- accel/accel.sh@20 -- # IFS=: 00:32:46.260 12:55:05 -- accel/accel.sh@20 -- # read -r var val 00:32:46.260 12:55:05 -- accel/accel.sh@28 -- # [[ -n software ]] 00:32:46.260 12:55:05 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:32:46.260 12:55:05 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:46.260 00:32:46.260 real 0m2.971s 00:32:46.260 user 0m2.528s 00:32:46.260 sys 0m0.237s 00:32:46.260 12:55:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:46.260 12:55:05 -- common/autotest_common.sh@10 -- # set +x 00:32:46.260 ************************************ 00:32:46.260 END TEST accel_comp 00:32:46.260 ************************************ 00:32:46.260 12:55:05 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:32:46.260 12:55:05 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:32:46.260 12:55:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:46.260 12:55:05 -- common/autotest_common.sh@10 -- # set +x 00:32:46.260 ************************************ 00:32:46.260 START TEST accel_decomp 00:32:46.260 ************************************ 00:32:46.260 12:55:05 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:32:46.260 12:55:05 -- accel/accel.sh@16 -- # local accel_opc 00:32:46.260 12:55:05 -- accel/accel.sh@17 -- # local accel_module 00:32:46.260 12:55:05 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:32:46.260 12:55:05 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:32:46.260 12:55:05 -- accel/accel.sh@12 -- # build_accel_config 00:32:46.260 12:55:05 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:32:46.260 12:55:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:32:46.260 12:55:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:32:46.260 12:55:05 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:32:46.260 12:55:05 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:32:46.260 12:55:05 -- accel/accel.sh@41 -- # local IFS=, 00:32:46.260 12:55:05 -- accel/accel.sh@42 -- # jq -r . 00:32:46.260 [2024-07-22 12:55:05.523434] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:32:46.260 [2024-07-22 12:55:05.523536] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70896 ] 00:32:46.260 [2024-07-22 12:55:05.660882] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:46.518 [2024-07-22 12:55:05.761647] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:47.893 12:55:06 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:32:47.893 00:32:47.893 SPDK Configuration: 00:32:47.893 Core mask: 0x1 00:32:47.893 00:32:47.893 Accel Perf Configuration: 00:32:47.893 Workload Type: decompress 00:32:47.893 Transfer size: 4096 bytes 00:32:47.893 Vector count 1 00:32:47.893 Module: software 00:32:47.893 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:32:47.893 Queue depth: 32 00:32:47.893 Allocate depth: 32 00:32:47.893 # threads/core: 1 00:32:47.893 Run time: 1 seconds 00:32:47.893 Verify: Yes 00:32:47.893 00:32:47.893 Running for 1 seconds... 00:32:47.893 00:32:47.893 Core,Thread Transfers Bandwidth Failed Miscompares 00:32:47.893 ------------------------------------------------------------------------------------ 00:32:47.893 0,0 66560/s 122 MiB/s 0 0 00:32:47.893 ==================================================================================== 00:32:47.893 Total 66560/s 260 MiB/s 0 0' 00:32:47.893 12:55:06 -- accel/accel.sh@20 -- # IFS=: 00:32:47.893 12:55:06 -- accel/accel.sh@20 -- # read -r var val 00:32:47.893 12:55:06 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:32:47.893 12:55:06 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:32:47.893 12:55:06 -- accel/accel.sh@12 -- # build_accel_config 00:32:47.893 12:55:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:32:47.893 12:55:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:32:47.893 12:55:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:32:47.893 12:55:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:32:47.893 12:55:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:32:47.893 12:55:06 -- accel/accel.sh@41 -- # local IFS=, 00:32:47.893 12:55:06 -- accel/accel.sh@42 -- # jq -r . 00:32:47.893 [2024-07-22 12:55:07.008829] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
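The 'Preparing input file...' banner in the compress and decompress runs refers to the corpus handed in with '-l /home/vagrant/spdk_repo/spdk/test/accel/bib': accel_perf reads that file up front (and, by this log's reading, pre-compresses it for the decompress workloads so there is valid input to decompress), while '-y' turns result verification on, hence 'Verify: Yes' here versus 'Verify: No' in the compress pass. The two invocations reduced to the flags visible in the trace, assuming the '-c /dev/fd/62' config can be dropped outside the harness since only software defaults are in play:
ACCEL_PERF=/home/vagrant/spdk_repo/spdk/build/examples/accel_perf
BIB=/home/vagrant/spdk_repo/spdk/test/accel/bib
"$ACCEL_PERF" -t 1 -w compress   -l "$BIB"      # software compress of the corpus, no verification
"$ACCEL_PERF" -t 1 -w decompress -l "$BIB" -y   # decompress the same corpus and verify the output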
00:32:47.893 [2024-07-22 12:55:07.008937] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70915 ] 00:32:47.893 [2024-07-22 12:55:07.140270] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:47.893 [2024-07-22 12:55:07.244534] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:47.893 12:55:07 -- accel/accel.sh@21 -- # val= 00:32:47.893 12:55:07 -- accel/accel.sh@22 -- # case "$var" in 00:32:47.893 12:55:07 -- accel/accel.sh@20 -- # IFS=: 00:32:47.893 12:55:07 -- accel/accel.sh@20 -- # read -r var val 00:32:47.893 12:55:07 -- accel/accel.sh@21 -- # val= 00:32:47.893 12:55:07 -- accel/accel.sh@22 -- # case "$var" in 00:32:47.893 12:55:07 -- accel/accel.sh@20 -- # IFS=: 00:32:47.893 12:55:07 -- accel/accel.sh@20 -- # read -r var val 00:32:47.893 12:55:07 -- accel/accel.sh@21 -- # val= 00:32:47.893 12:55:07 -- accel/accel.sh@22 -- # case "$var" in 00:32:47.893 12:55:07 -- accel/accel.sh@20 -- # IFS=: 00:32:47.893 12:55:07 -- accel/accel.sh@20 -- # read -r var val 00:32:47.893 12:55:07 -- accel/accel.sh@21 -- # val=0x1 00:32:47.893 12:55:07 -- accel/accel.sh@22 -- # case "$var" in 00:32:47.893 12:55:07 -- accel/accel.sh@20 -- # IFS=: 00:32:47.893 12:55:07 -- accel/accel.sh@20 -- # read -r var val 00:32:47.893 12:55:07 -- accel/accel.sh@21 -- # val= 00:32:47.893 12:55:07 -- accel/accel.sh@22 -- # case "$var" in 00:32:47.893 12:55:07 -- accel/accel.sh@20 -- # IFS=: 00:32:48.152 12:55:07 -- accel/accel.sh@20 -- # read -r var val 00:32:48.152 12:55:07 -- accel/accel.sh@21 -- # val= 00:32:48.152 12:55:07 -- accel/accel.sh@22 -- # case "$var" in 00:32:48.152 12:55:07 -- accel/accel.sh@20 -- # IFS=: 00:32:48.152 12:55:07 -- accel/accel.sh@20 -- # read -r var val 00:32:48.152 12:55:07 -- accel/accel.sh@21 -- # val=decompress 00:32:48.152 12:55:07 -- accel/accel.sh@22 -- # case "$var" in 00:32:48.152 12:55:07 -- accel/accel.sh@24 -- # accel_opc=decompress 00:32:48.152 12:55:07 -- accel/accel.sh@20 -- # IFS=: 00:32:48.152 12:55:07 -- accel/accel.sh@20 -- # read -r var val 00:32:48.152 12:55:07 -- accel/accel.sh@21 -- # val='4096 bytes' 00:32:48.152 12:55:07 -- accel/accel.sh@22 -- # case "$var" in 00:32:48.152 12:55:07 -- accel/accel.sh@20 -- # IFS=: 00:32:48.152 12:55:07 -- accel/accel.sh@20 -- # read -r var val 00:32:48.152 12:55:07 -- accel/accel.sh@21 -- # val= 00:32:48.152 12:55:07 -- accel/accel.sh@22 -- # case "$var" in 00:32:48.152 12:55:07 -- accel/accel.sh@20 -- # IFS=: 00:32:48.152 12:55:07 -- accel/accel.sh@20 -- # read -r var val 00:32:48.152 12:55:07 -- accel/accel.sh@21 -- # val=software 00:32:48.152 12:55:07 -- accel/accel.sh@22 -- # case "$var" in 00:32:48.152 12:55:07 -- accel/accel.sh@23 -- # accel_module=software 00:32:48.152 12:55:07 -- accel/accel.sh@20 -- # IFS=: 00:32:48.152 12:55:07 -- accel/accel.sh@20 -- # read -r var val 00:32:48.152 12:55:07 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:32:48.152 12:55:07 -- accel/accel.sh@22 -- # case "$var" in 00:32:48.152 12:55:07 -- accel/accel.sh@20 -- # IFS=: 00:32:48.152 12:55:07 -- accel/accel.sh@20 -- # read -r var val 00:32:48.152 12:55:07 -- accel/accel.sh@21 -- # val=32 00:32:48.152 12:55:07 -- accel/accel.sh@22 -- # case "$var" in 00:32:48.152 12:55:07 -- accel/accel.sh@20 -- # IFS=: 00:32:48.152 12:55:07 -- accel/accel.sh@20 -- # read -r var val 00:32:48.152 12:55:07 -- 
accel/accel.sh@21 -- # val=32 00:32:48.152 12:55:07 -- accel/accel.sh@22 -- # case "$var" in 00:32:48.152 12:55:07 -- accel/accel.sh@20 -- # IFS=: 00:32:48.152 12:55:07 -- accel/accel.sh@20 -- # read -r var val 00:32:48.152 12:55:07 -- accel/accel.sh@21 -- # val=1 00:32:48.152 12:55:07 -- accel/accel.sh@22 -- # case "$var" in 00:32:48.152 12:55:07 -- accel/accel.sh@20 -- # IFS=: 00:32:48.152 12:55:07 -- accel/accel.sh@20 -- # read -r var val 00:32:48.152 12:55:07 -- accel/accel.sh@21 -- # val='1 seconds' 00:32:48.152 12:55:07 -- accel/accel.sh@22 -- # case "$var" in 00:32:48.152 12:55:07 -- accel/accel.sh@20 -- # IFS=: 00:32:48.152 12:55:07 -- accel/accel.sh@20 -- # read -r var val 00:32:48.152 12:55:07 -- accel/accel.sh@21 -- # val=Yes 00:32:48.152 12:55:07 -- accel/accel.sh@22 -- # case "$var" in 00:32:48.152 12:55:07 -- accel/accel.sh@20 -- # IFS=: 00:32:48.152 12:55:07 -- accel/accel.sh@20 -- # read -r var val 00:32:48.152 12:55:07 -- accel/accel.sh@21 -- # val= 00:32:48.152 12:55:07 -- accel/accel.sh@22 -- # case "$var" in 00:32:48.152 12:55:07 -- accel/accel.sh@20 -- # IFS=: 00:32:48.152 12:55:07 -- accel/accel.sh@20 -- # read -r var val 00:32:48.152 12:55:07 -- accel/accel.sh@21 -- # val= 00:32:48.152 12:55:07 -- accel/accel.sh@22 -- # case "$var" in 00:32:48.152 12:55:07 -- accel/accel.sh@20 -- # IFS=: 00:32:48.152 12:55:07 -- accel/accel.sh@20 -- # read -r var val 00:32:49.087 12:55:08 -- accel/accel.sh@21 -- # val= 00:32:49.087 12:55:08 -- accel/accel.sh@22 -- # case "$var" in 00:32:49.087 12:55:08 -- accel/accel.sh@20 -- # IFS=: 00:32:49.087 12:55:08 -- accel/accel.sh@20 -- # read -r var val 00:32:49.087 12:55:08 -- accel/accel.sh@21 -- # val= 00:32:49.087 12:55:08 -- accel/accel.sh@22 -- # case "$var" in 00:32:49.087 12:55:08 -- accel/accel.sh@20 -- # IFS=: 00:32:49.087 12:55:08 -- accel/accel.sh@20 -- # read -r var val 00:32:49.087 12:55:08 -- accel/accel.sh@21 -- # val= 00:32:49.087 12:55:08 -- accel/accel.sh@22 -- # case "$var" in 00:32:49.087 12:55:08 -- accel/accel.sh@20 -- # IFS=: 00:32:49.087 12:55:08 -- accel/accel.sh@20 -- # read -r var val 00:32:49.087 12:55:08 -- accel/accel.sh@21 -- # val= 00:32:49.087 12:55:08 -- accel/accel.sh@22 -- # case "$var" in 00:32:49.087 12:55:08 -- accel/accel.sh@20 -- # IFS=: 00:32:49.087 12:55:08 -- accel/accel.sh@20 -- # read -r var val 00:32:49.087 12:55:08 -- accel/accel.sh@21 -- # val= 00:32:49.087 12:55:08 -- accel/accel.sh@22 -- # case "$var" in 00:32:49.087 12:55:08 -- accel/accel.sh@20 -- # IFS=: 00:32:49.087 12:55:08 -- accel/accel.sh@20 -- # read -r var val 00:32:49.087 12:55:08 -- accel/accel.sh@21 -- # val= 00:32:49.087 ************************************ 00:32:49.087 END TEST accel_decomp 00:32:49.087 ************************************ 00:32:49.087 12:55:08 -- accel/accel.sh@22 -- # case "$var" in 00:32:49.087 12:55:08 -- accel/accel.sh@20 -- # IFS=: 00:32:49.087 12:55:08 -- accel/accel.sh@20 -- # read -r var val 00:32:49.087 12:55:08 -- accel/accel.sh@28 -- # [[ -n software ]] 00:32:49.087 12:55:08 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:32:49.087 12:55:08 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:49.087 00:32:49.087 real 0m2.969s 00:32:49.087 user 0m2.534s 00:32:49.087 sys 0m0.232s 00:32:49.087 12:55:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:49.087 12:55:08 -- common/autotest_common.sh@10 -- # set +x 00:32:49.346 12:55:08 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 
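The accel_decmop_full pass announced just above differs from the plain accel_decomp pass only by '-o 0'; judging from the configuration block that follows (Transfer size: 111250 bytes instead of the 4096-byte default), passing 0 appears to let the transfer size follow the prepared input's chunking rather than forcing a fixed block. Reduced to the flags visible in the trace, again without the fd-passed config:
# Same decompress-and-verify run as accel_decomp, but with -o 0 so the 4096-byte default transfer size is not forced.
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w decompress \
    -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0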
00:32:49.346 12:55:08 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:32:49.347 12:55:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:49.347 12:55:08 -- common/autotest_common.sh@10 -- # set +x 00:32:49.347 ************************************ 00:32:49.347 START TEST accel_decmop_full 00:32:49.347 ************************************ 00:32:49.347 12:55:08 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:32:49.347 12:55:08 -- accel/accel.sh@16 -- # local accel_opc 00:32:49.347 12:55:08 -- accel/accel.sh@17 -- # local accel_module 00:32:49.347 12:55:08 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:32:49.347 12:55:08 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:32:49.347 12:55:08 -- accel/accel.sh@12 -- # build_accel_config 00:32:49.347 12:55:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:32:49.347 12:55:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:32:49.347 12:55:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:32:49.347 12:55:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:32:49.347 12:55:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:32:49.347 12:55:08 -- accel/accel.sh@41 -- # local IFS=, 00:32:49.347 12:55:08 -- accel/accel.sh@42 -- # jq -r . 00:32:49.347 [2024-07-22 12:55:08.542295] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:32:49.347 [2024-07-22 12:55:08.542381] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70950 ] 00:32:49.347 [2024-07-22 12:55:08.680876] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:49.605 [2024-07-22 12:55:08.771963] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:50.980 12:55:09 -- accel/accel.sh@18 -- # out='Preparing input file... 00:32:50.980 00:32:50.980 SPDK Configuration: 00:32:50.980 Core mask: 0x1 00:32:50.980 00:32:50.980 Accel Perf Configuration: 00:32:50.980 Workload Type: decompress 00:32:50.980 Transfer size: 111250 bytes 00:32:50.980 Vector count 1 00:32:50.980 Module: software 00:32:50.980 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:32:50.980 Queue depth: 32 00:32:50.980 Allocate depth: 32 00:32:50.980 # threads/core: 1 00:32:50.980 Run time: 1 seconds 00:32:50.980 Verify: Yes 00:32:50.980 00:32:50.980 Running for 1 seconds... 
00:32:50.980 00:32:50.980 Core,Thread Transfers Bandwidth Failed Miscompares 00:32:50.980 ------------------------------------------------------------------------------------ 00:32:50.980 0,0 4544/s 187 MiB/s 0 0 00:32:50.980 ==================================================================================== 00:32:50.980 Total 4544/s 482 MiB/s 0 0' 00:32:50.980 12:55:09 -- accel/accel.sh@20 -- # IFS=: 00:32:50.980 12:55:09 -- accel/accel.sh@20 -- # read -r var val 00:32:50.980 12:55:09 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:32:50.980 12:55:09 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:32:50.980 12:55:09 -- accel/accel.sh@12 -- # build_accel_config 00:32:50.980 12:55:09 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:32:50.980 12:55:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:32:50.980 12:55:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:32:50.980 12:55:09 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:32:50.980 12:55:09 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:32:50.980 12:55:09 -- accel/accel.sh@41 -- # local IFS=, 00:32:50.980 12:55:09 -- accel/accel.sh@42 -- # jq -r . 00:32:50.980 [2024-07-22 12:55:10.017390] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:32:50.980 [2024-07-22 12:55:10.017517] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70964 ] 00:32:50.980 [2024-07-22 12:55:10.159522] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:50.980 [2024-07-22 12:55:10.265001] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:50.980 12:55:10 -- accel/accel.sh@21 -- # val= 00:32:50.980 12:55:10 -- accel/accel.sh@22 -- # case "$var" in 00:32:50.980 12:55:10 -- accel/accel.sh@20 -- # IFS=: 00:32:50.980 12:55:10 -- accel/accel.sh@20 -- # read -r var val 00:32:50.980 12:55:10 -- accel/accel.sh@21 -- # val= 00:32:50.980 12:55:10 -- accel/accel.sh@22 -- # case "$var" in 00:32:50.980 12:55:10 -- accel/accel.sh@20 -- # IFS=: 00:32:50.980 12:55:10 -- accel/accel.sh@20 -- # read -r var val 00:32:50.980 12:55:10 -- accel/accel.sh@21 -- # val= 00:32:50.980 12:55:10 -- accel/accel.sh@22 -- # case "$var" in 00:32:50.980 12:55:10 -- accel/accel.sh@20 -- # IFS=: 00:32:50.980 12:55:10 -- accel/accel.sh@20 -- # read -r var val 00:32:50.980 12:55:10 -- accel/accel.sh@21 -- # val=0x1 00:32:50.980 12:55:10 -- accel/accel.sh@22 -- # case "$var" in 00:32:50.980 12:55:10 -- accel/accel.sh@20 -- # IFS=: 00:32:50.980 12:55:10 -- accel/accel.sh@20 -- # read -r var val 00:32:50.980 12:55:10 -- accel/accel.sh@21 -- # val= 00:32:50.980 12:55:10 -- accel/accel.sh@22 -- # case "$var" in 00:32:50.980 12:55:10 -- accel/accel.sh@20 -- # IFS=: 00:32:50.980 12:55:10 -- accel/accel.sh@20 -- # read -r var val 00:32:50.980 12:55:10 -- accel/accel.sh@21 -- # val= 00:32:50.980 12:55:10 -- accel/accel.sh@22 -- # case "$var" in 00:32:50.980 12:55:10 -- accel/accel.sh@20 -- # IFS=: 00:32:50.980 12:55:10 -- accel/accel.sh@20 -- # read -r var val 00:32:50.980 12:55:10 -- accel/accel.sh@21 -- # val=decompress 00:32:50.980 12:55:10 -- accel/accel.sh@22 -- # case "$var" in 00:32:50.980 12:55:10 -- accel/accel.sh@24 -- # accel_opc=decompress 00:32:50.980 12:55:10 -- accel/accel.sh@20 
-- # IFS=: 00:32:50.980 12:55:10 -- accel/accel.sh@20 -- # read -r var val 00:32:50.980 12:55:10 -- accel/accel.sh@21 -- # val='111250 bytes' 00:32:50.980 12:55:10 -- accel/accel.sh@22 -- # case "$var" in 00:32:50.980 12:55:10 -- accel/accel.sh@20 -- # IFS=: 00:32:50.980 12:55:10 -- accel/accel.sh@20 -- # read -r var val 00:32:50.980 12:55:10 -- accel/accel.sh@21 -- # val= 00:32:50.980 12:55:10 -- accel/accel.sh@22 -- # case "$var" in 00:32:50.980 12:55:10 -- accel/accel.sh@20 -- # IFS=: 00:32:50.980 12:55:10 -- accel/accel.sh@20 -- # read -r var val 00:32:50.980 12:55:10 -- accel/accel.sh@21 -- # val=software 00:32:50.980 12:55:10 -- accel/accel.sh@22 -- # case "$var" in 00:32:50.980 12:55:10 -- accel/accel.sh@23 -- # accel_module=software 00:32:50.980 12:55:10 -- accel/accel.sh@20 -- # IFS=: 00:32:50.980 12:55:10 -- accel/accel.sh@20 -- # read -r var val 00:32:50.980 12:55:10 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:32:50.980 12:55:10 -- accel/accel.sh@22 -- # case "$var" in 00:32:50.980 12:55:10 -- accel/accel.sh@20 -- # IFS=: 00:32:50.980 12:55:10 -- accel/accel.sh@20 -- # read -r var val 00:32:50.980 12:55:10 -- accel/accel.sh@21 -- # val=32 00:32:50.980 12:55:10 -- accel/accel.sh@22 -- # case "$var" in 00:32:50.980 12:55:10 -- accel/accel.sh@20 -- # IFS=: 00:32:50.980 12:55:10 -- accel/accel.sh@20 -- # read -r var val 00:32:50.980 12:55:10 -- accel/accel.sh@21 -- # val=32 00:32:50.980 12:55:10 -- accel/accel.sh@22 -- # case "$var" in 00:32:50.980 12:55:10 -- accel/accel.sh@20 -- # IFS=: 00:32:50.980 12:55:10 -- accel/accel.sh@20 -- # read -r var val 00:32:50.980 12:55:10 -- accel/accel.sh@21 -- # val=1 00:32:50.980 12:55:10 -- accel/accel.sh@22 -- # case "$var" in 00:32:50.980 12:55:10 -- accel/accel.sh@20 -- # IFS=: 00:32:50.980 12:55:10 -- accel/accel.sh@20 -- # read -r var val 00:32:50.980 12:55:10 -- accel/accel.sh@21 -- # val='1 seconds' 00:32:50.980 12:55:10 -- accel/accel.sh@22 -- # case "$var" in 00:32:50.980 12:55:10 -- accel/accel.sh@20 -- # IFS=: 00:32:50.980 12:55:10 -- accel/accel.sh@20 -- # read -r var val 00:32:50.980 12:55:10 -- accel/accel.sh@21 -- # val=Yes 00:32:50.980 12:55:10 -- accel/accel.sh@22 -- # case "$var" in 00:32:50.980 12:55:10 -- accel/accel.sh@20 -- # IFS=: 00:32:50.980 12:55:10 -- accel/accel.sh@20 -- # read -r var val 00:32:50.980 12:55:10 -- accel/accel.sh@21 -- # val= 00:32:50.980 12:55:10 -- accel/accel.sh@22 -- # case "$var" in 00:32:50.980 12:55:10 -- accel/accel.sh@20 -- # IFS=: 00:32:50.980 12:55:10 -- accel/accel.sh@20 -- # read -r var val 00:32:50.980 12:55:10 -- accel/accel.sh@21 -- # val= 00:32:50.980 12:55:10 -- accel/accel.sh@22 -- # case "$var" in 00:32:50.980 12:55:10 -- accel/accel.sh@20 -- # IFS=: 00:32:50.980 12:55:10 -- accel/accel.sh@20 -- # read -r var val 00:32:52.355 12:55:11 -- accel/accel.sh@21 -- # val= 00:32:52.355 12:55:11 -- accel/accel.sh@22 -- # case "$var" in 00:32:52.355 12:55:11 -- accel/accel.sh@20 -- # IFS=: 00:32:52.355 12:55:11 -- accel/accel.sh@20 -- # read -r var val 00:32:52.355 12:55:11 -- accel/accel.sh@21 -- # val= 00:32:52.355 12:55:11 -- accel/accel.sh@22 -- # case "$var" in 00:32:52.355 12:55:11 -- accel/accel.sh@20 -- # IFS=: 00:32:52.355 12:55:11 -- accel/accel.sh@20 -- # read -r var val 00:32:52.355 12:55:11 -- accel/accel.sh@21 -- # val= 00:32:52.355 12:55:11 -- accel/accel.sh@22 -- # case "$var" in 00:32:52.355 12:55:11 -- accel/accel.sh@20 -- # IFS=: 00:32:52.355 12:55:11 -- accel/accel.sh@20 -- # read -r var val 00:32:52.355 12:55:11 -- accel/accel.sh@21 -- # 
val= 00:32:52.355 12:55:11 -- accel/accel.sh@22 -- # case "$var" in 00:32:52.355 12:55:11 -- accel/accel.sh@20 -- # IFS=: 00:32:52.355 12:55:11 -- accel/accel.sh@20 -- # read -r var val 00:32:52.355 12:55:11 -- accel/accel.sh@21 -- # val= 00:32:52.355 12:55:11 -- accel/accel.sh@22 -- # case "$var" in 00:32:52.355 12:55:11 -- accel/accel.sh@20 -- # IFS=: 00:32:52.355 12:55:11 -- accel/accel.sh@20 -- # read -r var val 00:32:52.355 12:55:11 -- accel/accel.sh@21 -- # val= 00:32:52.355 12:55:11 -- accel/accel.sh@22 -- # case "$var" in 00:32:52.355 12:55:11 -- accel/accel.sh@20 -- # IFS=: 00:32:52.355 12:55:11 -- accel/accel.sh@20 -- # read -r var val 00:32:52.355 12:55:11 -- accel/accel.sh@28 -- # [[ -n software ]] 00:32:52.355 12:55:11 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:32:52.355 12:55:11 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:52.355 00:32:52.355 real 0m2.980s 00:32:52.355 user 0m2.536s 00:32:52.355 sys 0m0.239s 00:32:52.355 12:55:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:52.355 12:55:11 -- common/autotest_common.sh@10 -- # set +x 00:32:52.355 ************************************ 00:32:52.355 END TEST accel_decmop_full 00:32:52.355 ************************************ 00:32:52.355 12:55:11 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:32:52.355 12:55:11 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:32:52.355 12:55:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:52.355 12:55:11 -- common/autotest_common.sh@10 -- # set +x 00:32:52.355 ************************************ 00:32:52.355 START TEST accel_decomp_mcore 00:32:52.355 ************************************ 00:32:52.355 12:55:11 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:32:52.355 12:55:11 -- accel/accel.sh@16 -- # local accel_opc 00:32:52.355 12:55:11 -- accel/accel.sh@17 -- # local accel_module 00:32:52.355 12:55:11 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:32:52.355 12:55:11 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:32:52.355 12:55:11 -- accel/accel.sh@12 -- # build_accel_config 00:32:52.355 12:55:11 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:32:52.355 12:55:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:32:52.355 12:55:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:32:52.355 12:55:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:32:52.355 12:55:11 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:32:52.355 12:55:11 -- accel/accel.sh@41 -- # local IFS=, 00:32:52.355 12:55:11 -- accel/accel.sh@42 -- # jq -r . 00:32:52.355 [2024-07-22 12:55:11.572779] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:32:52.355 [2024-07-22 12:55:11.572941] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71004 ] 00:32:52.355 [2024-07-22 12:55:11.723570] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:52.613 [2024-07-22 12:55:11.831844] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:52.613 [2024-07-22 12:55:11.831998] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:32:52.613 [2024-07-22 12:55:11.832058] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:32:52.613 [2024-07-22 12:55:11.832448] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:53.988 12:55:13 -- accel/accel.sh@18 -- # out='Preparing input file... 00:32:53.988 00:32:53.988 SPDK Configuration: 00:32:53.988 Core mask: 0xf 00:32:53.988 00:32:53.988 Accel Perf Configuration: 00:32:53.988 Workload Type: decompress 00:32:53.988 Transfer size: 4096 bytes 00:32:53.988 Vector count 1 00:32:53.988 Module: software 00:32:53.988 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:32:53.988 Queue depth: 32 00:32:53.988 Allocate depth: 32 00:32:53.988 # threads/core: 1 00:32:53.988 Run time: 1 seconds 00:32:53.988 Verify: Yes 00:32:53.988 00:32:53.988 Running for 1 seconds... 00:32:53.988 00:32:53.988 Core,Thread Transfers Bandwidth Failed Miscompares 00:32:53.988 ------------------------------------------------------------------------------------ 00:32:53.988 0,0 56608/s 104 MiB/s 0 0 00:32:53.988 3,0 55744/s 102 MiB/s 0 0 00:32:53.988 2,0 57280/s 105 MiB/s 0 0 00:32:53.988 1,0 57152/s 105 MiB/s 0 0 00:32:53.988 ==================================================================================== 00:32:53.988 Total 226784/s 885 MiB/s 0 0' 00:32:53.988 12:55:13 -- accel/accel.sh@20 -- # IFS=: 00:32:53.988 12:55:13 -- accel/accel.sh@20 -- # read -r var val 00:32:53.988 12:55:13 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:32:53.988 12:55:13 -- accel/accel.sh@12 -- # build_accel_config 00:32:53.988 12:55:13 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:32:53.988 12:55:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:32:53.988 12:55:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:32:53.988 12:55:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:32:53.988 12:55:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:32:53.988 12:55:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:32:53.988 12:55:13 -- accel/accel.sh@41 -- # local IFS=, 00:32:53.988 12:55:13 -- accel/accel.sh@42 -- # jq -r . 00:32:53.988 [2024-07-22 12:55:13.117380] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
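accel_decomp_mcore adds '-m 0xf' to the same decompress/verify workload, so EAL is started with core mask 0xf, four reactors come up (the four 'Reactor started on core' notices above), and the result table gains one row per core (0,0 / 1,0 / 2,0 / 3,0) with the Total line aggregating them. The equivalent standalone invocation, again without the fd-passed config:
# Four-core run of the decompress/verify workload; expect one result row per core in the summary table.
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w decompress \
    -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf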
00:32:53.988 [2024-07-22 12:55:13.117525] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71021 ] 00:32:53.988 [2024-07-22 12:55:13.258586] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:53.988 [2024-07-22 12:55:13.361987] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:53.988 [2024-07-22 12:55:13.362170] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:32:53.988 [2024-07-22 12:55:13.362272] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:32:53.988 [2024-07-22 12:55:13.362553] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:54.246 12:55:13 -- accel/accel.sh@21 -- # val= 00:32:54.246 12:55:13 -- accel/accel.sh@22 -- # case "$var" in 00:32:54.246 12:55:13 -- accel/accel.sh@20 -- # IFS=: 00:32:54.246 12:55:13 -- accel/accel.sh@20 -- # read -r var val 00:32:54.246 12:55:13 -- accel/accel.sh@21 -- # val= 00:32:54.246 12:55:13 -- accel/accel.sh@22 -- # case "$var" in 00:32:54.246 12:55:13 -- accel/accel.sh@20 -- # IFS=: 00:32:54.246 12:55:13 -- accel/accel.sh@20 -- # read -r var val 00:32:54.246 12:55:13 -- accel/accel.sh@21 -- # val= 00:32:54.246 12:55:13 -- accel/accel.sh@22 -- # case "$var" in 00:32:54.246 12:55:13 -- accel/accel.sh@20 -- # IFS=: 00:32:54.246 12:55:13 -- accel/accel.sh@20 -- # read -r var val 00:32:54.246 12:55:13 -- accel/accel.sh@21 -- # val=0xf 00:32:54.246 12:55:13 -- accel/accel.sh@22 -- # case "$var" in 00:32:54.246 12:55:13 -- accel/accel.sh@20 -- # IFS=: 00:32:54.246 12:55:13 -- accel/accel.sh@20 -- # read -r var val 00:32:54.246 12:55:13 -- accel/accel.sh@21 -- # val= 00:32:54.246 12:55:13 -- accel/accel.sh@22 -- # case "$var" in 00:32:54.246 12:55:13 -- accel/accel.sh@20 -- # IFS=: 00:32:54.246 12:55:13 -- accel/accel.sh@20 -- # read -r var val 00:32:54.246 12:55:13 -- accel/accel.sh@21 -- # val= 00:32:54.246 12:55:13 -- accel/accel.sh@22 -- # case "$var" in 00:32:54.246 12:55:13 -- accel/accel.sh@20 -- # IFS=: 00:32:54.246 12:55:13 -- accel/accel.sh@20 -- # read -r var val 00:32:54.246 12:55:13 -- accel/accel.sh@21 -- # val=decompress 00:32:54.246 12:55:13 -- accel/accel.sh@22 -- # case "$var" in 00:32:54.246 12:55:13 -- accel/accel.sh@24 -- # accel_opc=decompress 00:32:54.246 12:55:13 -- accel/accel.sh@20 -- # IFS=: 00:32:54.246 12:55:13 -- accel/accel.sh@20 -- # read -r var val 00:32:54.246 12:55:13 -- accel/accel.sh@21 -- # val='4096 bytes' 00:32:54.246 12:55:13 -- accel/accel.sh@22 -- # case "$var" in 00:32:54.246 12:55:13 -- accel/accel.sh@20 -- # IFS=: 00:32:54.246 12:55:13 -- accel/accel.sh@20 -- # read -r var val 00:32:54.246 12:55:13 -- accel/accel.sh@21 -- # val= 00:32:54.246 12:55:13 -- accel/accel.sh@22 -- # case "$var" in 00:32:54.246 12:55:13 -- accel/accel.sh@20 -- # IFS=: 00:32:54.246 12:55:13 -- accel/accel.sh@20 -- # read -r var val 00:32:54.246 12:55:13 -- accel/accel.sh@21 -- # val=software 00:32:54.246 12:55:13 -- accel/accel.sh@22 -- # case "$var" in 00:32:54.246 12:55:13 -- accel/accel.sh@23 -- # accel_module=software 00:32:54.246 12:55:13 -- accel/accel.sh@20 -- # IFS=: 00:32:54.246 12:55:13 -- accel/accel.sh@20 -- # read -r var val 00:32:54.246 12:55:13 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:32:54.246 12:55:13 -- accel/accel.sh@22 -- # case "$var" in 00:32:54.246 12:55:13 -- accel/accel.sh@20 -- # IFS=: 
00:32:54.246 12:55:13 -- accel/accel.sh@20 -- # read -r var val 00:32:54.246 12:55:13 -- accel/accel.sh@21 -- # val=32 00:32:54.246 12:55:13 -- accel/accel.sh@22 -- # case "$var" in 00:32:54.246 12:55:13 -- accel/accel.sh@20 -- # IFS=: 00:32:54.246 12:55:13 -- accel/accel.sh@20 -- # read -r var val 00:32:54.246 12:55:13 -- accel/accel.sh@21 -- # val=32 00:32:54.246 12:55:13 -- accel/accel.sh@22 -- # case "$var" in 00:32:54.246 12:55:13 -- accel/accel.sh@20 -- # IFS=: 00:32:54.246 12:55:13 -- accel/accel.sh@20 -- # read -r var val 00:32:54.246 12:55:13 -- accel/accel.sh@21 -- # val=1 00:32:54.246 12:55:13 -- accel/accel.sh@22 -- # case "$var" in 00:32:54.246 12:55:13 -- accel/accel.sh@20 -- # IFS=: 00:32:54.246 12:55:13 -- accel/accel.sh@20 -- # read -r var val 00:32:54.246 12:55:13 -- accel/accel.sh@21 -- # val='1 seconds' 00:32:54.246 12:55:13 -- accel/accel.sh@22 -- # case "$var" in 00:32:54.246 12:55:13 -- accel/accel.sh@20 -- # IFS=: 00:32:54.246 12:55:13 -- accel/accel.sh@20 -- # read -r var val 00:32:54.246 12:55:13 -- accel/accel.sh@21 -- # val=Yes 00:32:54.246 12:55:13 -- accel/accel.sh@22 -- # case "$var" in 00:32:54.246 12:55:13 -- accel/accel.sh@20 -- # IFS=: 00:32:54.246 12:55:13 -- accel/accel.sh@20 -- # read -r var val 00:32:54.246 12:55:13 -- accel/accel.sh@21 -- # val= 00:32:54.246 12:55:13 -- accel/accel.sh@22 -- # case "$var" in 00:32:54.246 12:55:13 -- accel/accel.sh@20 -- # IFS=: 00:32:54.246 12:55:13 -- accel/accel.sh@20 -- # read -r var val 00:32:54.246 12:55:13 -- accel/accel.sh@21 -- # val= 00:32:54.246 12:55:13 -- accel/accel.sh@22 -- # case "$var" in 00:32:54.246 12:55:13 -- accel/accel.sh@20 -- # IFS=: 00:32:54.246 12:55:13 -- accel/accel.sh@20 -- # read -r var val 00:32:55.181 12:55:14 -- accel/accel.sh@21 -- # val= 00:32:55.181 12:55:14 -- accel/accel.sh@22 -- # case "$var" in 00:32:55.181 12:55:14 -- accel/accel.sh@20 -- # IFS=: 00:32:55.181 12:55:14 -- accel/accel.sh@20 -- # read -r var val 00:32:55.181 12:55:14 -- accel/accel.sh@21 -- # val= 00:32:55.181 12:55:14 -- accel/accel.sh@22 -- # case "$var" in 00:32:55.181 12:55:14 -- accel/accel.sh@20 -- # IFS=: 00:32:55.181 12:55:14 -- accel/accel.sh@20 -- # read -r var val 00:32:55.181 12:55:14 -- accel/accel.sh@21 -- # val= 00:32:55.181 12:55:14 -- accel/accel.sh@22 -- # case "$var" in 00:32:55.181 12:55:14 -- accel/accel.sh@20 -- # IFS=: 00:32:55.181 12:55:14 -- accel/accel.sh@20 -- # read -r var val 00:32:55.181 12:55:14 -- accel/accel.sh@21 -- # val= 00:32:55.181 12:55:14 -- accel/accel.sh@22 -- # case "$var" in 00:32:55.181 12:55:14 -- accel/accel.sh@20 -- # IFS=: 00:32:55.181 12:55:14 -- accel/accel.sh@20 -- # read -r var val 00:32:55.181 12:55:14 -- accel/accel.sh@21 -- # val= 00:32:55.181 12:55:14 -- accel/accel.sh@22 -- # case "$var" in 00:32:55.181 12:55:14 -- accel/accel.sh@20 -- # IFS=: 00:32:55.181 12:55:14 -- accel/accel.sh@20 -- # read -r var val 00:32:55.181 12:55:14 -- accel/accel.sh@21 -- # val= 00:32:55.181 12:55:14 -- accel/accel.sh@22 -- # case "$var" in 00:32:55.181 12:55:14 -- accel/accel.sh@20 -- # IFS=: 00:32:55.181 12:55:14 -- accel/accel.sh@20 -- # read -r var val 00:32:55.181 12:55:14 -- accel/accel.sh@21 -- # val= 00:32:55.181 12:55:14 -- accel/accel.sh@22 -- # case "$var" in 00:32:55.181 12:55:14 -- accel/accel.sh@20 -- # IFS=: 00:32:55.181 12:55:14 -- accel/accel.sh@20 -- # read -r var val 00:32:55.181 12:55:14 -- accel/accel.sh@21 -- # val= 00:32:55.181 12:55:14 -- accel/accel.sh@22 -- # case "$var" in 00:32:55.181 12:55:14 -- accel/accel.sh@20 -- # IFS=: 00:32:55.181 12:55:14 -- 
accel/accel.sh@20 -- # read -r var val 00:32:55.181 12:55:14 -- accel/accel.sh@21 -- # val= 00:32:55.181 12:55:14 -- accel/accel.sh@22 -- # case "$var" in 00:32:55.181 12:55:14 -- accel/accel.sh@20 -- # IFS=: 00:32:55.181 12:55:14 -- accel/accel.sh@20 -- # read -r var val 00:32:55.181 12:55:14 -- accel/accel.sh@28 -- # [[ -n software ]] 00:32:55.181 12:55:14 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:32:55.181 ************************************ 00:32:55.181 END TEST accel_decomp_mcore 00:32:55.181 ************************************ 00:32:55.181 12:55:14 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:55.181 00:32:55.181 real 0m3.057s 00:32:55.181 user 0m9.510s 00:32:55.181 sys 0m0.278s 00:32:55.181 12:55:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:55.181 12:55:14 -- common/autotest_common.sh@10 -- # set +x 00:32:55.456 12:55:14 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:32:55.456 12:55:14 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:32:55.456 12:55:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:55.456 12:55:14 -- common/autotest_common.sh@10 -- # set +x 00:32:55.456 ************************************ 00:32:55.456 START TEST accel_decomp_full_mcore 00:32:55.456 ************************************ 00:32:55.456 12:55:14 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:32:55.456 12:55:14 -- accel/accel.sh@16 -- # local accel_opc 00:32:55.456 12:55:14 -- accel/accel.sh@17 -- # local accel_module 00:32:55.456 12:55:14 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:32:55.456 12:55:14 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:32:55.456 12:55:14 -- accel/accel.sh@12 -- # build_accel_config 00:32:55.456 12:55:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:32:55.456 12:55:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:32:55.456 12:55:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:32:55.456 12:55:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:32:55.456 12:55:14 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:32:55.456 12:55:14 -- accel/accel.sh@41 -- # local IFS=, 00:32:55.456 12:55:14 -- accel/accel.sh@42 -- # jq -r . 00:32:55.456 [2024-07-22 12:55:14.671607] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:32:55.456 [2024-07-22 12:55:14.671703] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71064 ] 00:32:55.456 [2024-07-22 12:55:14.807024] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:55.729 [2024-07-22 12:55:14.910735] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:55.729 [2024-07-22 12:55:14.910850] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:32:55.729 [2024-07-22 12:55:14.911011] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:55.729 [2024-07-22 12:55:14.911009] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:32:57.103 12:55:16 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:32:57.103 00:32:57.103 SPDK Configuration: 00:32:57.103 Core mask: 0xf 00:32:57.103 00:32:57.103 Accel Perf Configuration: 00:32:57.103 Workload Type: decompress 00:32:57.103 Transfer size: 111250 bytes 00:32:57.103 Vector count 1 00:32:57.103 Module: software 00:32:57.103 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:32:57.103 Queue depth: 32 00:32:57.103 Allocate depth: 32 00:32:57.103 # threads/core: 1 00:32:57.103 Run time: 1 seconds 00:32:57.103 Verify: Yes 00:32:57.103 00:32:57.103 Running for 1 seconds... 00:32:57.103 00:32:57.103 Core,Thread Transfers Bandwidth Failed Miscompares 00:32:57.103 ------------------------------------------------------------------------------------ 00:32:57.103 0,0 4416/s 182 MiB/s 0 0 00:32:57.103 3,0 4160/s 171 MiB/s 0 0 00:32:57.103 2,0 4384/s 181 MiB/s 0 0 00:32:57.103 1,0 4288/s 177 MiB/s 0 0 00:32:57.103 ==================================================================================== 00:32:57.103 Total 17248/s 1829 MiB/s 0 0' 00:32:57.103 12:55:16 -- accel/accel.sh@20 -- # IFS=: 00:32:57.103 12:55:16 -- accel/accel.sh@20 -- # read -r var val 00:32:57.103 12:55:16 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:32:57.103 12:55:16 -- accel/accel.sh@12 -- # build_accel_config 00:32:57.103 12:55:16 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:32:57.103 12:55:16 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:32:57.103 12:55:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:32:57.103 12:55:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:32:57.103 12:55:16 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:32:57.103 12:55:16 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:32:57.103 12:55:16 -- accel/accel.sh@41 -- # local IFS=, 00:32:57.103 12:55:16 -- accel/accel.sh@42 -- # jq -r . 00:32:57.103 [2024-07-22 12:55:16.177059] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
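The full_mcore variant above differs from the previous test only in -o 0, which the configuration dump reports as a 111250-byte transfer size, i.e. the whole input handled as one operation instead of 4096-byte chunks (an inference from the two config dumps, not stated directly in the log). Per-core rates drop accordingly while bandwidth roughly doubles:

  echo $(( 4416 + 4160 + 4384 + 4288 ))        # 17248 transfers/s, the Total row
  echo $(( 17248 * 111250 / 1024 / 1024 ))     # 1829 MiB/s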
00:32:57.103 [2024-07-22 12:55:16.177385] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71081 ] 00:32:57.103 [2024-07-22 12:55:16.314006] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:57.103 [2024-07-22 12:55:16.411377] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:57.103 [2024-07-22 12:55:16.411445] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:32:57.103 [2024-07-22 12:55:16.411663] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:57.103 [2024-07-22 12:55:16.411665] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:32:57.103 12:55:16 -- accel/accel.sh@21 -- # val= 00:32:57.103 12:55:16 -- accel/accel.sh@22 -- # case "$var" in 00:32:57.103 12:55:16 -- accel/accel.sh@20 -- # IFS=: 00:32:57.103 12:55:16 -- accel/accel.sh@20 -- # read -r var val 00:32:57.103 12:55:16 -- accel/accel.sh@21 -- # val= 00:32:57.103 12:55:16 -- accel/accel.sh@22 -- # case "$var" in 00:32:57.103 12:55:16 -- accel/accel.sh@20 -- # IFS=: 00:32:57.103 12:55:16 -- accel/accel.sh@20 -- # read -r var val 00:32:57.103 12:55:16 -- accel/accel.sh@21 -- # val= 00:32:57.103 12:55:16 -- accel/accel.sh@22 -- # case "$var" in 00:32:57.103 12:55:16 -- accel/accel.sh@20 -- # IFS=: 00:32:57.103 12:55:16 -- accel/accel.sh@20 -- # read -r var val 00:32:57.103 12:55:16 -- accel/accel.sh@21 -- # val=0xf 00:32:57.103 12:55:16 -- accel/accel.sh@22 -- # case "$var" in 00:32:57.103 12:55:16 -- accel/accel.sh@20 -- # IFS=: 00:32:57.103 12:55:16 -- accel/accel.sh@20 -- # read -r var val 00:32:57.103 12:55:16 -- accel/accel.sh@21 -- # val= 00:32:57.103 12:55:16 -- accel/accel.sh@22 -- # case "$var" in 00:32:57.103 12:55:16 -- accel/accel.sh@20 -- # IFS=: 00:32:57.103 12:55:16 -- accel/accel.sh@20 -- # read -r var val 00:32:57.103 12:55:16 -- accel/accel.sh@21 -- # val= 00:32:57.103 12:55:16 -- accel/accel.sh@22 -- # case "$var" in 00:32:57.104 12:55:16 -- accel/accel.sh@20 -- # IFS=: 00:32:57.104 12:55:16 -- accel/accel.sh@20 -- # read -r var val 00:32:57.104 12:55:16 -- accel/accel.sh@21 -- # val=decompress 00:32:57.104 12:55:16 -- accel/accel.sh@22 -- # case "$var" in 00:32:57.104 12:55:16 -- accel/accel.sh@24 -- # accel_opc=decompress 00:32:57.104 12:55:16 -- accel/accel.sh@20 -- # IFS=: 00:32:57.104 12:55:16 -- accel/accel.sh@20 -- # read -r var val 00:32:57.104 12:55:16 -- accel/accel.sh@21 -- # val='111250 bytes' 00:32:57.104 12:55:16 -- accel/accel.sh@22 -- # case "$var" in 00:32:57.104 12:55:16 -- accel/accel.sh@20 -- # IFS=: 00:32:57.104 12:55:16 -- accel/accel.sh@20 -- # read -r var val 00:32:57.104 12:55:16 -- accel/accel.sh@21 -- # val= 00:32:57.104 12:55:16 -- accel/accel.sh@22 -- # case "$var" in 00:32:57.104 12:55:16 -- accel/accel.sh@20 -- # IFS=: 00:32:57.104 12:55:16 -- accel/accel.sh@20 -- # read -r var val 00:32:57.104 12:55:16 -- accel/accel.sh@21 -- # val=software 00:32:57.104 12:55:16 -- accel/accel.sh@22 -- # case "$var" in 00:32:57.104 12:55:16 -- accel/accel.sh@23 -- # accel_module=software 00:32:57.104 12:55:16 -- accel/accel.sh@20 -- # IFS=: 00:32:57.104 12:55:16 -- accel/accel.sh@20 -- # read -r var val 00:32:57.104 12:55:16 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:32:57.104 12:55:16 -- accel/accel.sh@22 -- # case "$var" in 00:32:57.104 12:55:16 -- accel/accel.sh@20 -- # IFS=: 
00:32:57.104 12:55:16 -- accel/accel.sh@20 -- # read -r var val 00:32:57.104 12:55:16 -- accel/accel.sh@21 -- # val=32 00:32:57.104 12:55:16 -- accel/accel.sh@22 -- # case "$var" in 00:32:57.104 12:55:16 -- accel/accel.sh@20 -- # IFS=: 00:32:57.104 12:55:16 -- accel/accel.sh@20 -- # read -r var val 00:32:57.104 12:55:16 -- accel/accel.sh@21 -- # val=32 00:32:57.104 12:55:16 -- accel/accel.sh@22 -- # case "$var" in 00:32:57.104 12:55:16 -- accel/accel.sh@20 -- # IFS=: 00:32:57.104 12:55:16 -- accel/accel.sh@20 -- # read -r var val 00:32:57.104 12:55:16 -- accel/accel.sh@21 -- # val=1 00:32:57.104 12:55:16 -- accel/accel.sh@22 -- # case "$var" in 00:32:57.104 12:55:16 -- accel/accel.sh@20 -- # IFS=: 00:32:57.104 12:55:16 -- accel/accel.sh@20 -- # read -r var val 00:32:57.104 12:55:16 -- accel/accel.sh@21 -- # val='1 seconds' 00:32:57.104 12:55:16 -- accel/accel.sh@22 -- # case "$var" in 00:32:57.104 12:55:16 -- accel/accel.sh@20 -- # IFS=: 00:32:57.104 12:55:16 -- accel/accel.sh@20 -- # read -r var val 00:32:57.104 12:55:16 -- accel/accel.sh@21 -- # val=Yes 00:32:57.104 12:55:16 -- accel/accel.sh@22 -- # case "$var" in 00:32:57.104 12:55:16 -- accel/accel.sh@20 -- # IFS=: 00:32:57.104 12:55:16 -- accel/accel.sh@20 -- # read -r var val 00:32:57.104 12:55:16 -- accel/accel.sh@21 -- # val= 00:32:57.104 12:55:16 -- accel/accel.sh@22 -- # case "$var" in 00:32:57.104 12:55:16 -- accel/accel.sh@20 -- # IFS=: 00:32:57.104 12:55:16 -- accel/accel.sh@20 -- # read -r var val 00:32:57.104 12:55:16 -- accel/accel.sh@21 -- # val= 00:32:57.104 12:55:16 -- accel/accel.sh@22 -- # case "$var" in 00:32:57.104 12:55:16 -- accel/accel.sh@20 -- # IFS=: 00:32:57.104 12:55:16 -- accel/accel.sh@20 -- # read -r var val 00:32:58.479 12:55:17 -- accel/accel.sh@21 -- # val= 00:32:58.479 12:55:17 -- accel/accel.sh@22 -- # case "$var" in 00:32:58.479 12:55:17 -- accel/accel.sh@20 -- # IFS=: 00:32:58.479 12:55:17 -- accel/accel.sh@20 -- # read -r var val 00:32:58.479 12:55:17 -- accel/accel.sh@21 -- # val= 00:32:58.479 12:55:17 -- accel/accel.sh@22 -- # case "$var" in 00:32:58.479 12:55:17 -- accel/accel.sh@20 -- # IFS=: 00:32:58.479 12:55:17 -- accel/accel.sh@20 -- # read -r var val 00:32:58.479 12:55:17 -- accel/accel.sh@21 -- # val= 00:32:58.479 12:55:17 -- accel/accel.sh@22 -- # case "$var" in 00:32:58.479 12:55:17 -- accel/accel.sh@20 -- # IFS=: 00:32:58.479 12:55:17 -- accel/accel.sh@20 -- # read -r var val 00:32:58.479 12:55:17 -- accel/accel.sh@21 -- # val= 00:32:58.479 12:55:17 -- accel/accel.sh@22 -- # case "$var" in 00:32:58.479 12:55:17 -- accel/accel.sh@20 -- # IFS=: 00:32:58.479 12:55:17 -- accel/accel.sh@20 -- # read -r var val 00:32:58.479 12:55:17 -- accel/accel.sh@21 -- # val= 00:32:58.479 12:55:17 -- accel/accel.sh@22 -- # case "$var" in 00:32:58.479 12:55:17 -- accel/accel.sh@20 -- # IFS=: 00:32:58.479 12:55:17 -- accel/accel.sh@20 -- # read -r var val 00:32:58.479 12:55:17 -- accel/accel.sh@21 -- # val= 00:32:58.479 12:55:17 -- accel/accel.sh@22 -- # case "$var" in 00:32:58.479 12:55:17 -- accel/accel.sh@20 -- # IFS=: 00:32:58.479 12:55:17 -- accel/accel.sh@20 -- # read -r var val 00:32:58.479 12:55:17 -- accel/accel.sh@21 -- # val= 00:32:58.479 12:55:17 -- accel/accel.sh@22 -- # case "$var" in 00:32:58.479 12:55:17 -- accel/accel.sh@20 -- # IFS=: 00:32:58.479 12:55:17 -- accel/accel.sh@20 -- # read -r var val 00:32:58.479 12:55:17 -- accel/accel.sh@21 -- # val= 00:32:58.479 12:55:17 -- accel/accel.sh@22 -- # case "$var" in 00:32:58.479 12:55:17 -- accel/accel.sh@20 -- # IFS=: 00:32:58.479 12:55:17 -- 
accel/accel.sh@20 -- # read -r var val 00:32:58.479 12:55:17 -- accel/accel.sh@21 -- # val= 00:32:58.479 12:55:17 -- accel/accel.sh@22 -- # case "$var" in 00:32:58.479 12:55:17 -- accel/accel.sh@20 -- # IFS=: 00:32:58.479 12:55:17 -- accel/accel.sh@20 -- # read -r var val 00:32:58.479 12:55:17 -- accel/accel.sh@28 -- # [[ -n software ]] 00:32:58.479 12:55:17 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:32:58.479 12:55:17 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:58.479 00:32:58.479 real 0m3.011s 00:32:58.479 user 0m9.462s 00:32:58.479 sys 0m0.281s 00:32:58.479 12:55:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:58.479 12:55:17 -- common/autotest_common.sh@10 -- # set +x 00:32:58.479 ************************************ 00:32:58.479 END TEST accel_decomp_full_mcore 00:32:58.479 ************************************ 00:32:58.479 12:55:17 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:32:58.479 12:55:17 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:32:58.479 12:55:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:58.479 12:55:17 -- common/autotest_common.sh@10 -- # set +x 00:32:58.479 ************************************ 00:32:58.479 START TEST accel_decomp_mthread 00:32:58.479 ************************************ 00:32:58.479 12:55:17 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:32:58.479 12:55:17 -- accel/accel.sh@16 -- # local accel_opc 00:32:58.479 12:55:17 -- accel/accel.sh@17 -- # local accel_module 00:32:58.479 12:55:17 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:32:58.479 12:55:17 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:32:58.479 12:55:17 -- accel/accel.sh@12 -- # build_accel_config 00:32:58.479 12:55:17 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:32:58.479 12:55:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:32:58.479 12:55:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:32:58.479 12:55:17 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:32:58.479 12:55:17 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:32:58.479 12:55:17 -- accel/accel.sh@41 -- # local IFS=, 00:32:58.479 12:55:17 -- accel/accel.sh@42 -- # jq -r . 00:32:58.479 [2024-07-22 12:55:17.730296] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:32:58.479 [2024-07-22 12:55:17.730431] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71124 ] 00:32:58.479 [2024-07-22 12:55:17.864561] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:58.737 [2024-07-22 12:55:17.960625] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:00.112 12:55:19 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:33:00.112 00:33:00.112 SPDK Configuration: 00:33:00.112 Core mask: 0x1 00:33:00.112 00:33:00.112 Accel Perf Configuration: 00:33:00.112 Workload Type: decompress 00:33:00.112 Transfer size: 4096 bytes 00:33:00.112 Vector count 1 00:33:00.112 Module: software 00:33:00.112 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:33:00.112 Queue depth: 32 00:33:00.112 Allocate depth: 32 00:33:00.112 # threads/core: 2 00:33:00.112 Run time: 1 seconds 00:33:00.112 Verify: Yes 00:33:00.112 00:33:00.112 Running for 1 seconds... 00:33:00.112 00:33:00.112 Core,Thread Transfers Bandwidth Failed Miscompares 00:33:00.112 ------------------------------------------------------------------------------------ 00:33:00.112 0,1 33696/s 62 MiB/s 0 0 00:33:00.112 0,0 33536/s 61 MiB/s 0 0 00:33:00.112 ==================================================================================== 00:33:00.112 Total 67232/s 262 MiB/s 0 0' 00:33:00.112 12:55:19 -- accel/accel.sh@20 -- # IFS=: 00:33:00.112 12:55:19 -- accel/accel.sh@20 -- # read -r var val 00:33:00.112 12:55:19 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:33:00.112 12:55:19 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:33:00.112 12:55:19 -- accel/accel.sh@12 -- # build_accel_config 00:33:00.112 12:55:19 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:33:00.112 12:55:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:33:00.112 12:55:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:33:00.113 12:55:19 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:33:00.113 12:55:19 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:33:00.113 12:55:19 -- accel/accel.sh@41 -- # local IFS=, 00:33:00.113 12:55:19 -- accel/accel.sh@42 -- # jq -r . 00:33:00.113 [2024-07-22 12:55:19.206357] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
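The mthread variant above keeps a single core (the EAL line shows -c 0x1) and instead adds -T 2, so the rows 0,0 and 0,1 are two worker threads on core 0 rather than two separate cores. The same arithmetic applies:

  echo $(( 33696 + 33536 ))                    # 67232 transfers/s, the Total row
  echo $(( 67232 * 4096 / 1024 / 1024 ))       # 262 MiB/s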
00:33:00.113 [2024-07-22 12:55:19.206445] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71138 ] 00:33:00.113 [2024-07-22 12:55:19.339349] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:00.113 [2024-07-22 12:55:19.429647] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:00.113 12:55:19 -- accel/accel.sh@21 -- # val= 00:33:00.113 12:55:19 -- accel/accel.sh@22 -- # case "$var" in 00:33:00.113 12:55:19 -- accel/accel.sh@20 -- # IFS=: 00:33:00.113 12:55:19 -- accel/accel.sh@20 -- # read -r var val 00:33:00.113 12:55:19 -- accel/accel.sh@21 -- # val= 00:33:00.113 12:55:19 -- accel/accel.sh@22 -- # case "$var" in 00:33:00.113 12:55:19 -- accel/accel.sh@20 -- # IFS=: 00:33:00.113 12:55:19 -- accel/accel.sh@20 -- # read -r var val 00:33:00.113 12:55:19 -- accel/accel.sh@21 -- # val= 00:33:00.113 12:55:19 -- accel/accel.sh@22 -- # case "$var" in 00:33:00.113 12:55:19 -- accel/accel.sh@20 -- # IFS=: 00:33:00.113 12:55:19 -- accel/accel.sh@20 -- # read -r var val 00:33:00.113 12:55:19 -- accel/accel.sh@21 -- # val=0x1 00:33:00.113 12:55:19 -- accel/accel.sh@22 -- # case "$var" in 00:33:00.113 12:55:19 -- accel/accel.sh@20 -- # IFS=: 00:33:00.113 12:55:19 -- accel/accel.sh@20 -- # read -r var val 00:33:00.113 12:55:19 -- accel/accel.sh@21 -- # val= 00:33:00.113 12:55:19 -- accel/accel.sh@22 -- # case "$var" in 00:33:00.113 12:55:19 -- accel/accel.sh@20 -- # IFS=: 00:33:00.113 12:55:19 -- accel/accel.sh@20 -- # read -r var val 00:33:00.113 12:55:19 -- accel/accel.sh@21 -- # val= 00:33:00.113 12:55:19 -- accel/accel.sh@22 -- # case "$var" in 00:33:00.113 12:55:19 -- accel/accel.sh@20 -- # IFS=: 00:33:00.113 12:55:19 -- accel/accel.sh@20 -- # read -r var val 00:33:00.113 12:55:19 -- accel/accel.sh@21 -- # val=decompress 00:33:00.113 12:55:19 -- accel/accel.sh@22 -- # case "$var" in 00:33:00.113 12:55:19 -- accel/accel.sh@24 -- # accel_opc=decompress 00:33:00.113 12:55:19 -- accel/accel.sh@20 -- # IFS=: 00:33:00.113 12:55:19 -- accel/accel.sh@20 -- # read -r var val 00:33:00.113 12:55:19 -- accel/accel.sh@21 -- # val='4096 bytes' 00:33:00.113 12:55:19 -- accel/accel.sh@22 -- # case "$var" in 00:33:00.113 12:55:19 -- accel/accel.sh@20 -- # IFS=: 00:33:00.113 12:55:19 -- accel/accel.sh@20 -- # read -r var val 00:33:00.113 12:55:19 -- accel/accel.sh@21 -- # val= 00:33:00.113 12:55:19 -- accel/accel.sh@22 -- # case "$var" in 00:33:00.113 12:55:19 -- accel/accel.sh@20 -- # IFS=: 00:33:00.113 12:55:19 -- accel/accel.sh@20 -- # read -r var val 00:33:00.113 12:55:19 -- accel/accel.sh@21 -- # val=software 00:33:00.113 12:55:19 -- accel/accel.sh@22 -- # case "$var" in 00:33:00.113 12:55:19 -- accel/accel.sh@23 -- # accel_module=software 00:33:00.113 12:55:19 -- accel/accel.sh@20 -- # IFS=: 00:33:00.113 12:55:19 -- accel/accel.sh@20 -- # read -r var val 00:33:00.113 12:55:19 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:33:00.113 12:55:19 -- accel/accel.sh@22 -- # case "$var" in 00:33:00.113 12:55:19 -- accel/accel.sh@20 -- # IFS=: 00:33:00.113 12:55:19 -- accel/accel.sh@20 -- # read -r var val 00:33:00.113 12:55:19 -- accel/accel.sh@21 -- # val=32 00:33:00.113 12:55:19 -- accel/accel.sh@22 -- # case "$var" in 00:33:00.113 12:55:19 -- accel/accel.sh@20 -- # IFS=: 00:33:00.113 12:55:19 -- accel/accel.sh@20 -- # read -r var val 00:33:00.113 12:55:19 -- 
accel/accel.sh@21 -- # val=32 00:33:00.113 12:55:19 -- accel/accel.sh@22 -- # case "$var" in 00:33:00.113 12:55:19 -- accel/accel.sh@20 -- # IFS=: 00:33:00.113 12:55:19 -- accel/accel.sh@20 -- # read -r var val 00:33:00.113 12:55:19 -- accel/accel.sh@21 -- # val=2 00:33:00.113 12:55:19 -- accel/accel.sh@22 -- # case "$var" in 00:33:00.113 12:55:19 -- accel/accel.sh@20 -- # IFS=: 00:33:00.113 12:55:19 -- accel/accel.sh@20 -- # read -r var val 00:33:00.113 12:55:19 -- accel/accel.sh@21 -- # val='1 seconds' 00:33:00.113 12:55:19 -- accel/accel.sh@22 -- # case "$var" in 00:33:00.113 12:55:19 -- accel/accel.sh@20 -- # IFS=: 00:33:00.113 12:55:19 -- accel/accel.sh@20 -- # read -r var val 00:33:00.113 12:55:19 -- accel/accel.sh@21 -- # val=Yes 00:33:00.113 12:55:19 -- accel/accel.sh@22 -- # case "$var" in 00:33:00.113 12:55:19 -- accel/accel.sh@20 -- # IFS=: 00:33:00.113 12:55:19 -- accel/accel.sh@20 -- # read -r var val 00:33:00.113 12:55:19 -- accel/accel.sh@21 -- # val= 00:33:00.113 12:55:19 -- accel/accel.sh@22 -- # case "$var" in 00:33:00.113 12:55:19 -- accel/accel.sh@20 -- # IFS=: 00:33:00.113 12:55:19 -- accel/accel.sh@20 -- # read -r var val 00:33:00.113 12:55:19 -- accel/accel.sh@21 -- # val= 00:33:00.113 12:55:19 -- accel/accel.sh@22 -- # case "$var" in 00:33:00.113 12:55:19 -- accel/accel.sh@20 -- # IFS=: 00:33:00.113 12:55:19 -- accel/accel.sh@20 -- # read -r var val 00:33:01.489 12:55:20 -- accel/accel.sh@21 -- # val= 00:33:01.490 12:55:20 -- accel/accel.sh@22 -- # case "$var" in 00:33:01.490 12:55:20 -- accel/accel.sh@20 -- # IFS=: 00:33:01.490 12:55:20 -- accel/accel.sh@20 -- # read -r var val 00:33:01.490 12:55:20 -- accel/accel.sh@21 -- # val= 00:33:01.490 12:55:20 -- accel/accel.sh@22 -- # case "$var" in 00:33:01.490 12:55:20 -- accel/accel.sh@20 -- # IFS=: 00:33:01.490 12:55:20 -- accel/accel.sh@20 -- # read -r var val 00:33:01.490 12:55:20 -- accel/accel.sh@21 -- # val= 00:33:01.490 12:55:20 -- accel/accel.sh@22 -- # case "$var" in 00:33:01.490 12:55:20 -- accel/accel.sh@20 -- # IFS=: 00:33:01.490 12:55:20 -- accel/accel.sh@20 -- # read -r var val 00:33:01.490 12:55:20 -- accel/accel.sh@21 -- # val= 00:33:01.490 12:55:20 -- accel/accel.sh@22 -- # case "$var" in 00:33:01.490 12:55:20 -- accel/accel.sh@20 -- # IFS=: 00:33:01.490 12:55:20 -- accel/accel.sh@20 -- # read -r var val 00:33:01.490 12:55:20 -- accel/accel.sh@21 -- # val= 00:33:01.490 12:55:20 -- accel/accel.sh@22 -- # case "$var" in 00:33:01.490 12:55:20 -- accel/accel.sh@20 -- # IFS=: 00:33:01.490 12:55:20 -- accel/accel.sh@20 -- # read -r var val 00:33:01.490 12:55:20 -- accel/accel.sh@21 -- # val= 00:33:01.490 12:55:20 -- accel/accel.sh@22 -- # case "$var" in 00:33:01.490 12:55:20 -- accel/accel.sh@20 -- # IFS=: 00:33:01.490 12:55:20 -- accel/accel.sh@20 -- # read -r var val 00:33:01.490 12:55:20 -- accel/accel.sh@21 -- # val= 00:33:01.490 12:55:20 -- accel/accel.sh@22 -- # case "$var" in 00:33:01.490 12:55:20 -- accel/accel.sh@20 -- # IFS=: 00:33:01.490 12:55:20 -- accel/accel.sh@20 -- # read -r var val 00:33:01.490 12:55:20 -- accel/accel.sh@28 -- # [[ -n software ]] 00:33:01.490 12:55:20 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:33:01.490 12:55:20 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:01.490 00:33:01.490 real 0m2.962s 00:33:01.490 user 0m2.530s 00:33:01.490 sys 0m0.231s 00:33:01.490 12:55:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:01.490 12:55:20 -- common/autotest_common.sh@10 -- # set +x 00:33:01.490 ************************************ 00:33:01.490 END 
TEST accel_decomp_mthread 00:33:01.490 ************************************ 00:33:01.490 12:55:20 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:33:01.490 12:55:20 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:33:01.490 12:55:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:01.490 12:55:20 -- common/autotest_common.sh@10 -- # set +x 00:33:01.490 ************************************ 00:33:01.490 START TEST accel_deomp_full_mthread 00:33:01.490 ************************************ 00:33:01.490 12:55:20 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:33:01.490 12:55:20 -- accel/accel.sh@16 -- # local accel_opc 00:33:01.490 12:55:20 -- accel/accel.sh@17 -- # local accel_module 00:33:01.490 12:55:20 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:33:01.490 12:55:20 -- accel/accel.sh@12 -- # build_accel_config 00:33:01.490 12:55:20 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:33:01.490 12:55:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:33:01.490 12:55:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:33:01.490 12:55:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:33:01.490 12:55:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:33:01.490 12:55:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:33:01.490 12:55:20 -- accel/accel.sh@41 -- # local IFS=, 00:33:01.490 12:55:20 -- accel/accel.sh@42 -- # jq -r . 00:33:01.490 [2024-07-22 12:55:20.753407] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:33:01.490 [2024-07-22 12:55:20.753506] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71180 ] 00:33:01.490 [2024-07-22 12:55:20.892612] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:01.749 [2024-07-22 12:55:20.989976] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:03.126 12:55:22 -- accel/accel.sh@18 -- # out='Preparing input file... 00:33:03.126 00:33:03.126 SPDK Configuration: 00:33:03.126 Core mask: 0x1 00:33:03.126 00:33:03.126 Accel Perf Configuration: 00:33:03.126 Workload Type: decompress 00:33:03.126 Transfer size: 111250 bytes 00:33:03.126 Vector count 1 00:33:03.126 Module: software 00:33:03.126 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:33:03.126 Queue depth: 32 00:33:03.126 Allocate depth: 32 00:33:03.126 # threads/core: 2 00:33:03.126 Run time: 1 seconds 00:33:03.126 Verify: Yes 00:33:03.126 00:33:03.126 Running for 1 seconds... 
00:33:03.126 00:33:03.126 Core,Thread Transfers Bandwidth Failed Miscompares 00:33:03.126 ------------------------------------------------------------------------------------ 00:33:03.126 0,1 2240/s 92 MiB/s 0 0 00:33:03.126 0,0 2208/s 91 MiB/s 0 0 00:33:03.126 ==================================================================================== 00:33:03.126 Total 4448/s 471 MiB/s 0 0' 00:33:03.126 12:55:22 -- accel/accel.sh@20 -- # IFS=: 00:33:03.126 12:55:22 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:33:03.126 12:55:22 -- accel/accel.sh@20 -- # read -r var val 00:33:03.126 12:55:22 -- accel/accel.sh@12 -- # build_accel_config 00:33:03.126 12:55:22 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:33:03.126 12:55:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:33:03.126 12:55:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:33:03.126 12:55:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:33:03.126 12:55:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:33:03.126 12:55:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:33:03.126 12:55:22 -- accel/accel.sh@41 -- # local IFS=, 00:33:03.126 12:55:22 -- accel/accel.sh@42 -- # jq -r . 00:33:03.126 [2024-07-22 12:55:22.260155] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:33:03.127 [2024-07-22 12:55:22.260715] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71194 ] 00:33:03.127 [2024-07-22 12:55:22.396363] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:03.127 [2024-07-22 12:55:22.492957] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:03.385 12:55:22 -- accel/accel.sh@21 -- # val= 00:33:03.385 12:55:22 -- accel/accel.sh@22 -- # case "$var" in 00:33:03.385 12:55:22 -- accel/accel.sh@20 -- # IFS=: 00:33:03.385 12:55:22 -- accel/accel.sh@20 -- # read -r var val 00:33:03.385 12:55:22 -- accel/accel.sh@21 -- # val= 00:33:03.385 12:55:22 -- accel/accel.sh@22 -- # case "$var" in 00:33:03.385 12:55:22 -- accel/accel.sh@20 -- # IFS=: 00:33:03.385 12:55:22 -- accel/accel.sh@20 -- # read -r var val 00:33:03.385 12:55:22 -- accel/accel.sh@21 -- # val= 00:33:03.385 12:55:22 -- accel/accel.sh@22 -- # case "$var" in 00:33:03.385 12:55:22 -- accel/accel.sh@20 -- # IFS=: 00:33:03.385 12:55:22 -- accel/accel.sh@20 -- # read -r var val 00:33:03.385 12:55:22 -- accel/accel.sh@21 -- # val=0x1 00:33:03.385 12:55:22 -- accel/accel.sh@22 -- # case "$var" in 00:33:03.385 12:55:22 -- accel/accel.sh@20 -- # IFS=: 00:33:03.385 12:55:22 -- accel/accel.sh@20 -- # read -r var val 00:33:03.385 12:55:22 -- accel/accel.sh@21 -- # val= 00:33:03.385 12:55:22 -- accel/accel.sh@22 -- # case "$var" in 00:33:03.385 12:55:22 -- accel/accel.sh@20 -- # IFS=: 00:33:03.385 12:55:22 -- accel/accel.sh@20 -- # read -r var val 00:33:03.385 12:55:22 -- accel/accel.sh@21 -- # val= 00:33:03.385 12:55:22 -- accel/accel.sh@22 -- # case "$var" in 00:33:03.385 12:55:22 -- accel/accel.sh@20 -- # IFS=: 00:33:03.385 12:55:22 -- accel/accel.sh@20 -- # read -r var val 00:33:03.385 12:55:22 -- accel/accel.sh@21 -- # val=decompress 00:33:03.385 12:55:22 -- accel/accel.sh@22 -- # case "$var" in 00:33:03.385 12:55:22 -- accel/accel.sh@24 -- # 
accel_opc=decompress 00:33:03.385 12:55:22 -- accel/accel.sh@20 -- # IFS=: 00:33:03.385 12:55:22 -- accel/accel.sh@20 -- # read -r var val 00:33:03.385 12:55:22 -- accel/accel.sh@21 -- # val='111250 bytes' 00:33:03.385 12:55:22 -- accel/accel.sh@22 -- # case "$var" in 00:33:03.385 12:55:22 -- accel/accel.sh@20 -- # IFS=: 00:33:03.385 12:55:22 -- accel/accel.sh@20 -- # read -r var val 00:33:03.385 12:55:22 -- accel/accel.sh@21 -- # val= 00:33:03.385 12:55:22 -- accel/accel.sh@22 -- # case "$var" in 00:33:03.385 12:55:22 -- accel/accel.sh@20 -- # IFS=: 00:33:03.385 12:55:22 -- accel/accel.sh@20 -- # read -r var val 00:33:03.385 12:55:22 -- accel/accel.sh@21 -- # val=software 00:33:03.385 12:55:22 -- accel/accel.sh@22 -- # case "$var" in 00:33:03.385 12:55:22 -- accel/accel.sh@23 -- # accel_module=software 00:33:03.385 12:55:22 -- accel/accel.sh@20 -- # IFS=: 00:33:03.385 12:55:22 -- accel/accel.sh@20 -- # read -r var val 00:33:03.385 12:55:22 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:33:03.385 12:55:22 -- accel/accel.sh@22 -- # case "$var" in 00:33:03.385 12:55:22 -- accel/accel.sh@20 -- # IFS=: 00:33:03.385 12:55:22 -- accel/accel.sh@20 -- # read -r var val 00:33:03.385 12:55:22 -- accel/accel.sh@21 -- # val=32 00:33:03.385 12:55:22 -- accel/accel.sh@22 -- # case "$var" in 00:33:03.385 12:55:22 -- accel/accel.sh@20 -- # IFS=: 00:33:03.385 12:55:22 -- accel/accel.sh@20 -- # read -r var val 00:33:03.385 12:55:22 -- accel/accel.sh@21 -- # val=32 00:33:03.385 12:55:22 -- accel/accel.sh@22 -- # case "$var" in 00:33:03.385 12:55:22 -- accel/accel.sh@20 -- # IFS=: 00:33:03.385 12:55:22 -- accel/accel.sh@20 -- # read -r var val 00:33:03.385 12:55:22 -- accel/accel.sh@21 -- # val=2 00:33:03.385 12:55:22 -- accel/accel.sh@22 -- # case "$var" in 00:33:03.385 12:55:22 -- accel/accel.sh@20 -- # IFS=: 00:33:03.385 12:55:22 -- accel/accel.sh@20 -- # read -r var val 00:33:03.385 12:55:22 -- accel/accel.sh@21 -- # val='1 seconds' 00:33:03.385 12:55:22 -- accel/accel.sh@22 -- # case "$var" in 00:33:03.385 12:55:22 -- accel/accel.sh@20 -- # IFS=: 00:33:03.385 12:55:22 -- accel/accel.sh@20 -- # read -r var val 00:33:03.385 12:55:22 -- accel/accel.sh@21 -- # val=Yes 00:33:03.385 12:55:22 -- accel/accel.sh@22 -- # case "$var" in 00:33:03.385 12:55:22 -- accel/accel.sh@20 -- # IFS=: 00:33:03.385 12:55:22 -- accel/accel.sh@20 -- # read -r var val 00:33:03.385 12:55:22 -- accel/accel.sh@21 -- # val= 00:33:03.385 12:55:22 -- accel/accel.sh@22 -- # case "$var" in 00:33:03.385 12:55:22 -- accel/accel.sh@20 -- # IFS=: 00:33:03.385 12:55:22 -- accel/accel.sh@20 -- # read -r var val 00:33:03.385 12:55:22 -- accel/accel.sh@21 -- # val= 00:33:03.385 12:55:22 -- accel/accel.sh@22 -- # case "$var" in 00:33:03.385 12:55:22 -- accel/accel.sh@20 -- # IFS=: 00:33:03.385 12:55:22 -- accel/accel.sh@20 -- # read -r var val 00:33:04.320 12:55:23 -- accel/accel.sh@21 -- # val= 00:33:04.320 12:55:23 -- accel/accel.sh@22 -- # case "$var" in 00:33:04.320 12:55:23 -- accel/accel.sh@20 -- # IFS=: 00:33:04.320 12:55:23 -- accel/accel.sh@20 -- # read -r var val 00:33:04.320 12:55:23 -- accel/accel.sh@21 -- # val= 00:33:04.320 12:55:23 -- accel/accel.sh@22 -- # case "$var" in 00:33:04.320 12:55:23 -- accel/accel.sh@20 -- # IFS=: 00:33:04.320 12:55:23 -- accel/accel.sh@20 -- # read -r var val 00:33:04.320 12:55:23 -- accel/accel.sh@21 -- # val= 00:33:04.320 12:55:23 -- accel/accel.sh@22 -- # case "$var" in 00:33:04.320 12:55:23 -- accel/accel.sh@20 -- # IFS=: 00:33:04.320 12:55:23 -- accel/accel.sh@20 -- # 
read -r var val 00:33:04.320 12:55:23 -- accel/accel.sh@21 -- # val= 00:33:04.320 12:55:23 -- accel/accel.sh@22 -- # case "$var" in 00:33:04.320 12:55:23 -- accel/accel.sh@20 -- # IFS=: 00:33:04.320 12:55:23 -- accel/accel.sh@20 -- # read -r var val 00:33:04.320 12:55:23 -- accel/accel.sh@21 -- # val= 00:33:04.320 12:55:23 -- accel/accel.sh@22 -- # case "$var" in 00:33:04.320 12:55:23 -- accel/accel.sh@20 -- # IFS=: 00:33:04.320 12:55:23 -- accel/accel.sh@20 -- # read -r var val 00:33:04.320 12:55:23 -- accel/accel.sh@21 -- # val= 00:33:04.320 12:55:23 -- accel/accel.sh@22 -- # case "$var" in 00:33:04.320 12:55:23 -- accel/accel.sh@20 -- # IFS=: 00:33:04.321 12:55:23 -- accel/accel.sh@20 -- # read -r var val 00:33:04.321 12:55:23 -- accel/accel.sh@21 -- # val= 00:33:04.321 12:55:23 -- accel/accel.sh@22 -- # case "$var" in 00:33:04.321 12:55:23 -- accel/accel.sh@20 -- # IFS=: 00:33:04.321 12:55:23 -- accel/accel.sh@20 -- # read -r var val 00:33:04.321 12:55:23 -- accel/accel.sh@28 -- # [[ -n software ]] 00:33:04.321 ************************************ 00:33:04.321 END TEST accel_deomp_full_mthread 00:33:04.321 ************************************ 00:33:04.321 12:55:23 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:33:04.321 12:55:23 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:04.321 00:33:04.321 real 0m3.011s 00:33:04.321 user 0m2.574s 00:33:04.321 sys 0m0.234s 00:33:04.321 12:55:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:04.321 12:55:23 -- common/autotest_common.sh@10 -- # set +x 00:33:04.579 12:55:23 -- accel/accel.sh@116 -- # [[ n == y ]] 00:33:04.579 12:55:23 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:33:04.579 12:55:23 -- accel/accel.sh@129 -- # build_accel_config 00:33:04.579 12:55:23 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:33:04.579 12:55:23 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:33:04.579 12:55:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:33:04.579 12:55:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:04.579 12:55:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:33:04.579 12:55:23 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:33:04.579 12:55:23 -- common/autotest_common.sh@10 -- # set +x 00:33:04.579 12:55:23 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:33:04.579 12:55:23 -- accel/accel.sh@41 -- # local IFS=, 00:33:04.579 12:55:23 -- accel/accel.sh@42 -- # jq -r . 00:33:04.579 ************************************ 00:33:04.579 START TEST accel_dif_functional_tests 00:33:04.579 ************************************ 00:33:04.580 12:55:23 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:33:04.580 [2024-07-22 12:55:23.836979] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
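At this point the suite switches from accel_perf throughput runs to the CUnit-based DIF functional tests: the dif binary exercises the verify and generate-copy paths for the three protection-information fields that appear in the traces below (guard, application tag, reference tag). It is launched the same way the harness does above, with an accel JSON config handed over on fd 62:

  # as driven by run_test above; build_accel_config supplies the JSON config on fd 62
  /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62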
00:33:04.580 [2024-07-22 12:55:23.837081] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71234 ] 00:33:04.580 [2024-07-22 12:55:23.976496] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:04.838 [2024-07-22 12:55:24.073589] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:04.838 [2024-07-22 12:55:24.073675] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:33:04.838 [2024-07-22 12:55:24.073681] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:04.838 00:33:04.838 00:33:04.838 CUnit - A unit testing framework for C - Version 2.1-3 00:33:04.838 http://cunit.sourceforge.net/ 00:33:04.838 00:33:04.838 00:33:04.838 Suite: accel_dif 00:33:04.838 Test: verify: DIF generated, GUARD check ...passed 00:33:04.838 Test: verify: DIF generated, APPTAG check ...passed 00:33:04.838 Test: verify: DIF generated, REFTAG check ...passed 00:33:04.838 Test: verify: DIF not generated, GUARD check ...passed 00:33:04.838 Test: verify: DIF not generated, APPTAG check ...[2024-07-22 12:55:24.163165] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:33:04.838 [2024-07-22 12:55:24.163268] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:33:04.838 [2024-07-22 12:55:24.163306] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:33:04.838 passed 00:33:04.838 Test: verify: DIF not generated, REFTAG check ...passed 00:33:04.838 Test: verify: APPTAG correct, APPTAG check ...[2024-07-22 12:55:24.163334] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:33:04.838 [2024-07-22 12:55:24.163359] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:33:04.838 [2024-07-22 12:55:24.163457] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:33:04.838 passed 00:33:04.838 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-22 12:55:24.163525] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:33:04.838 passed 00:33:04.838 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:33:04.838 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:33:04.838 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:33:04.838 Test: verify: REFTAG_INIT incorrect, REFTAG check ...passed 00:33:04.839 Test: generate copy: DIF generated, GUARD check ...[2024-07-22 12:55:24.163897] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:33:04.839 passed 00:33:04.839 Test: generate copy: DIF generated, APTTAG check ...passed 00:33:04.839 Test: generate copy: DIF generated, REFTAG check ...passed 00:33:04.839 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:33:04.839 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:33:04.839 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:33:04.839 Test: generate copy: iovecs-len validate ...[2024-07-22 12:55:24.164395] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:33:04.839 passed 00:33:04.839 Test: generate copy: buffer alignment validate ...passed 00:33:04.839 00:33:04.839 Run Summary: Type Total Ran Passed Failed Inactive 00:33:04.839 suites 1 1 n/a 0 0 00:33:04.839 tests 20 20 20 0 0 00:33:04.839 asserts 204 204 204 0 n/a 00:33:04.839 00:33:04.839 Elapsed time = 0.004 seconds 00:33:05.097 ************************************ 00:33:05.097 END TEST accel_dif_functional_tests 00:33:05.097 ************************************ 00:33:05.097 00:33:05.097 real 0m0.581s 00:33:05.097 user 0m0.779s 00:33:05.097 sys 0m0.151s 00:33:05.097 12:55:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:05.097 12:55:24 -- common/autotest_common.sh@10 -- # set +x 00:33:05.097 00:33:05.097 real 1m3.603s 00:33:05.097 user 1m7.735s 00:33:05.097 sys 0m6.370s 00:33:05.097 12:55:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:05.097 12:55:24 -- common/autotest_common.sh@10 -- # set +x 00:33:05.097 ************************************ 00:33:05.097 END TEST accel 00:33:05.097 ************************************ 00:33:05.097 12:55:24 -- spdk/autotest.sh@190 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:33:05.097 12:55:24 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:33:05.097 12:55:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:05.097 12:55:24 -- common/autotest_common.sh@10 -- # set +x 00:33:05.097 ************************************ 00:33:05.097 START TEST accel_rpc 00:33:05.097 ************************************ 00:33:05.097 12:55:24 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:33:05.097 * Looking for test storage... 00:33:05.356 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:33:05.356 12:55:24 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:33:05.356 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:05.356 12:55:24 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=71293 00:33:05.356 12:55:24 -- accel/accel_rpc.sh@15 -- # waitforlisten 71293 00:33:05.356 12:55:24 -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:33:05.356 12:55:24 -- common/autotest_common.sh@819 -- # '[' -z 71293 ']' 00:33:05.356 12:55:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:05.356 12:55:24 -- common/autotest_common.sh@824 -- # local max_retries=100 00:33:05.356 12:55:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:05.356 12:55:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:33:05.356 12:55:24 -- common/autotest_common.sh@10 -- # set +x 00:33:05.356 [2024-07-22 12:55:24.587284] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
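The accel_rpc test that starts here runs spdk_tgt with --wait-for-rpc so the opcode-to-module assignment can be changed before the framework initializes. The trace below drives it through rpc_cmd; the same flow by hand over rpc.py would look roughly like this (paths and names as in this run, output omitted):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc accel_assign_opc -o copy -m software   # route the copy opcode to the software module
  $rpc framework_start_init                   # leave --wait-for-rpc mode, finish startup
  $rpc accel_get_opc_assignments              # copy should now report "software"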
00:33:05.356 [2024-07-22 12:55:24.587387] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71293 ] 00:33:05.356 [2024-07-22 12:55:24.727217] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:05.614 [2024-07-22 12:55:24.827844] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:33:05.614 [2024-07-22 12:55:24.828030] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:06.181 12:55:25 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:33:06.181 12:55:25 -- common/autotest_common.sh@852 -- # return 0 00:33:06.181 12:55:25 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:33:06.181 12:55:25 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:33:06.181 12:55:25 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:33:06.181 12:55:25 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:33:06.181 12:55:25 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:33:06.181 12:55:25 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:33:06.181 12:55:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:06.181 12:55:25 -- common/autotest_common.sh@10 -- # set +x 00:33:06.181 ************************************ 00:33:06.181 START TEST accel_assign_opcode 00:33:06.181 ************************************ 00:33:06.181 12:55:25 -- common/autotest_common.sh@1104 -- # accel_assign_opcode_test_suite 00:33:06.181 12:55:25 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:33:06.181 12:55:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:06.181 12:55:25 -- common/autotest_common.sh@10 -- # set +x 00:33:06.181 [2024-07-22 12:55:25.564605] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:33:06.181 12:55:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:06.181 12:55:25 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:33:06.181 12:55:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:06.181 12:55:25 -- common/autotest_common.sh@10 -- # set +x 00:33:06.181 [2024-07-22 12:55:25.576602] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:33:06.181 12:55:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:06.181 12:55:25 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:33:06.181 12:55:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:06.181 12:55:25 -- common/autotest_common.sh@10 -- # set +x 00:33:06.439 12:55:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:06.439 12:55:25 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:33:06.439 12:55:25 -- accel/accel_rpc.sh@42 -- # grep software 00:33:06.439 12:55:25 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:33:06.439 12:55:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:06.439 12:55:25 -- common/autotest_common.sh@10 -- # set +x 00:33:06.439 12:55:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:06.439 software 00:33:06.439 ************************************ 00:33:06.439 END TEST accel_assign_opcode 00:33:06.439 ************************************ 00:33:06.439 00:33:06.439 real 0m0.300s 00:33:06.439 user 0m0.059s 00:33:06.439 sys 0m0.007s 00:33:06.439 12:55:25 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:33:06.439 12:55:25 -- common/autotest_common.sh@10 -- # set +x 00:33:06.698 12:55:25 -- accel/accel_rpc.sh@55 -- # killprocess 71293 00:33:06.698 12:55:25 -- common/autotest_common.sh@926 -- # '[' -z 71293 ']' 00:33:06.698 12:55:25 -- common/autotest_common.sh@930 -- # kill -0 71293 00:33:06.698 12:55:25 -- common/autotest_common.sh@931 -- # uname 00:33:06.698 12:55:25 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:33:06.698 12:55:25 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 71293 00:33:06.698 killing process with pid 71293 00:33:06.698 12:55:25 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:33:06.698 12:55:25 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:33:06.698 12:55:25 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 71293' 00:33:06.698 12:55:25 -- common/autotest_common.sh@945 -- # kill 71293 00:33:06.698 12:55:25 -- common/autotest_common.sh@950 -- # wait 71293 00:33:07.012 ************************************ 00:33:07.012 END TEST accel_rpc 00:33:07.012 ************************************ 00:33:07.012 00:33:07.012 real 0m1.828s 00:33:07.012 user 0m1.925s 00:33:07.012 sys 0m0.431s 00:33:07.012 12:55:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:07.012 12:55:26 -- common/autotest_common.sh@10 -- # set +x 00:33:07.012 12:55:26 -- spdk/autotest.sh@191 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:33:07.012 12:55:26 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:33:07.012 12:55:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:07.012 12:55:26 -- common/autotest_common.sh@10 -- # set +x 00:33:07.012 ************************************ 00:33:07.012 START TEST app_cmdline 00:33:07.012 ************************************ 00:33:07.012 12:55:26 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:33:07.012 * Looking for test storage... 00:33:07.012 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:33:07.012 12:55:26 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:33:07.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:07.012 12:55:26 -- app/cmdline.sh@17 -- # spdk_tgt_pid=71403 00:33:07.012 12:55:26 -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:33:07.012 12:55:26 -- app/cmdline.sh@18 -- # waitforlisten 71403 00:33:07.012 12:55:26 -- common/autotest_common.sh@819 -- # '[' -z 71403 ']' 00:33:07.012 12:55:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:07.012 12:55:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:33:07.012 12:55:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:07.012 12:55:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:33:07.012 12:55:26 -- common/autotest_common.sh@10 -- # set +x 00:33:07.286 [2024-07-22 12:55:26.469299] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
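The cmdline test that starts here launches spdk_tgt with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods may be called; anything else should fail exactly the way env_dpdk_get_mem_stats does further down (Code=-32601, Method not found). A minimal sketch of the same checks with rpc.py:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc spdk_get_version               # version JSON as shown below (git sha1 4b94202c6, v24.01.1-pre)
  $rpc rpc_get_methods | jq -r '.[]'  # exactly the two allowed methods
  $rpc env_dpdk_get_mem_stats         # expected to be rejected: Method not found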
00:33:07.286 [2024-07-22 12:55:26.469399] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71403 ] 00:33:07.286 [2024-07-22 12:55:26.604868] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:07.286 [2024-07-22 12:55:26.701619] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:33:07.286 [2024-07-22 12:55:26.701789] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:08.219 12:55:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:33:08.219 12:55:27 -- common/autotest_common.sh@852 -- # return 0 00:33:08.219 12:55:27 -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:33:08.477 { 00:33:08.477 "fields": { 00:33:08.477 "commit": "4b94202c6", 00:33:08.477 "major": 24, 00:33:08.477 "minor": 1, 00:33:08.477 "patch": 1, 00:33:08.477 "suffix": "-pre" 00:33:08.477 }, 00:33:08.477 "version": "SPDK v24.01.1-pre git sha1 4b94202c6" 00:33:08.477 } 00:33:08.477 12:55:27 -- app/cmdline.sh@22 -- # expected_methods=() 00:33:08.477 12:55:27 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:33:08.477 12:55:27 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:33:08.477 12:55:27 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:33:08.477 12:55:27 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:33:08.477 12:55:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:08.477 12:55:27 -- app/cmdline.sh@26 -- # sort 00:33:08.477 12:55:27 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:33:08.477 12:55:27 -- common/autotest_common.sh@10 -- # set +x 00:33:08.477 12:55:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:08.477 12:55:27 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:33:08.477 12:55:27 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:33:08.477 12:55:27 -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:33:08.477 12:55:27 -- common/autotest_common.sh@640 -- # local es=0 00:33:08.477 12:55:27 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:33:08.477 12:55:27 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:08.477 12:55:27 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:33:08.477 12:55:27 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:08.477 12:55:27 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:33:08.477 12:55:27 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:08.477 12:55:27 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:33:08.477 12:55:27 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:08.477 12:55:27 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:33:08.477 12:55:27 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:33:08.736 2024/07/22 12:55:27 error on JSON-RPC call, method: env_dpdk_get_mem_stats, params: map[], err: error received for 
env_dpdk_get_mem_stats method, err: Code=-32601 Msg=Method not found 00:33:08.736 request: 00:33:08.736 { 00:33:08.736 "method": "env_dpdk_get_mem_stats", 00:33:08.736 "params": {} 00:33:08.736 } 00:33:08.736 Got JSON-RPC error response 00:33:08.736 GoRPCClient: error on JSON-RPC call 00:33:08.736 12:55:28 -- common/autotest_common.sh@643 -- # es=1 00:33:08.736 12:55:28 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:33:08.736 12:55:28 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:33:08.736 12:55:28 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:33:08.736 12:55:28 -- app/cmdline.sh@1 -- # killprocess 71403 00:33:08.736 12:55:28 -- common/autotest_common.sh@926 -- # '[' -z 71403 ']' 00:33:08.736 12:55:28 -- common/autotest_common.sh@930 -- # kill -0 71403 00:33:08.736 12:55:28 -- common/autotest_common.sh@931 -- # uname 00:33:08.736 12:55:28 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:33:08.736 12:55:28 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 71403 00:33:08.736 12:55:28 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:33:08.736 12:55:28 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:33:08.736 12:55:28 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 71403' 00:33:08.736 killing process with pid 71403 00:33:08.736 12:55:28 -- common/autotest_common.sh@945 -- # kill 71403 00:33:08.736 12:55:28 -- common/autotest_common.sh@950 -- # wait 71403 00:33:08.994 00:33:08.994 real 0m2.075s 00:33:08.994 user 0m2.605s 00:33:08.994 sys 0m0.470s 00:33:08.994 12:55:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:08.994 ************************************ 00:33:08.994 END TEST app_cmdline 00:33:08.994 ************************************ 00:33:08.994 12:55:28 -- common/autotest_common.sh@10 -- # set +x 00:33:09.253 12:55:28 -- spdk/autotest.sh@192 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:33:09.253 12:55:28 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:33:09.253 12:55:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:09.253 12:55:28 -- common/autotest_common.sh@10 -- # set +x 00:33:09.253 ************************************ 00:33:09.253 START TEST version 00:33:09.253 ************************************ 00:33:09.253 12:55:28 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:33:09.253 * Looking for test storage... 
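The app_cmdline run above started spdk_tgt with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods answer and anything else is rejected with JSON-RPC error -32601. A minimal sketch reproducing the three calls by hand (rpc.py path and jq filter as used in the trace; the default RPC socket /var/tmp/spdk.sock is assumed):
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version                        # allowed: returns the version object shown above
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods | jq -r '.[]' | sort    # allowed: lists exactly the whitelisted methods
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats                  # not whitelisted: fails with Code=-32601 Msg='Method not found'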
00:33:09.253 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:33:09.253 12:55:28 -- app/version.sh@17 -- # get_header_version major 00:33:09.253 12:55:28 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:33:09.253 12:55:28 -- app/version.sh@14 -- # tr -d '"' 00:33:09.253 12:55:28 -- app/version.sh@14 -- # cut -f2 00:33:09.253 12:55:28 -- app/version.sh@17 -- # major=24 00:33:09.253 12:55:28 -- app/version.sh@18 -- # get_header_version minor 00:33:09.253 12:55:28 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:33:09.253 12:55:28 -- app/version.sh@14 -- # cut -f2 00:33:09.253 12:55:28 -- app/version.sh@14 -- # tr -d '"' 00:33:09.253 12:55:28 -- app/version.sh@18 -- # minor=1 00:33:09.253 12:55:28 -- app/version.sh@19 -- # get_header_version patch 00:33:09.253 12:55:28 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:33:09.253 12:55:28 -- app/version.sh@14 -- # cut -f2 00:33:09.253 12:55:28 -- app/version.sh@14 -- # tr -d '"' 00:33:09.253 12:55:28 -- app/version.sh@19 -- # patch=1 00:33:09.253 12:55:28 -- app/version.sh@20 -- # get_header_version suffix 00:33:09.253 12:55:28 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:33:09.253 12:55:28 -- app/version.sh@14 -- # cut -f2 00:33:09.253 12:55:28 -- app/version.sh@14 -- # tr -d '"' 00:33:09.253 12:55:28 -- app/version.sh@20 -- # suffix=-pre 00:33:09.253 12:55:28 -- app/version.sh@22 -- # version=24.1 00:33:09.253 12:55:28 -- app/version.sh@25 -- # (( patch != 0 )) 00:33:09.253 12:55:28 -- app/version.sh@25 -- # version=24.1.1 00:33:09.253 12:55:28 -- app/version.sh@28 -- # version=24.1.1rc0 00:33:09.253 12:55:28 -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:33:09.253 12:55:28 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:33:09.253 12:55:28 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:33:09.253 12:55:28 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:33:09.253 ************************************ 00:33:09.253 END TEST version 00:33:09.253 ************************************ 00:33:09.253 00:33:09.253 real 0m0.141s 00:33:09.253 user 0m0.081s 00:33:09.253 sys 0m0.093s 00:33:09.253 12:55:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:09.253 12:55:28 -- common/autotest_common.sh@10 -- # set +x 00:33:09.253 12:55:28 -- spdk/autotest.sh@194 -- # '[' 0 -eq 1 ']' 00:33:09.253 12:55:28 -- spdk/autotest.sh@204 -- # uname -s 00:33:09.253 12:55:28 -- spdk/autotest.sh@204 -- # [[ Linux == Linux ]] 00:33:09.253 12:55:28 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:33:09.253 12:55:28 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:33:09.253 12:55:28 -- spdk/autotest.sh@217 -- # '[' 0 -eq 1 ']' 00:33:09.253 12:55:28 -- spdk/autotest.sh@264 -- # '[' 0 -eq 1 ']' 00:33:09.253 12:55:28 -- spdk/autotest.sh@268 -- # timing_exit lib 00:33:09.253 12:55:28 -- common/autotest_common.sh@718 -- # xtrace_disable 00:33:09.253 12:55:28 -- common/autotest_common.sh@10 -- # set +x 00:33:09.513 12:55:28 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:33:09.513 12:55:28 -- 
spdk/autotest.sh@278 -- # '[' 0 -eq 1 ']' 00:33:09.513 12:55:28 -- spdk/autotest.sh@287 -- # '[' 1 -eq 1 ']' 00:33:09.513 12:55:28 -- spdk/autotest.sh@288 -- # export NET_TYPE 00:33:09.513 12:55:28 -- spdk/autotest.sh@291 -- # '[' tcp = rdma ']' 00:33:09.513 12:55:28 -- spdk/autotest.sh@294 -- # '[' tcp = tcp ']' 00:33:09.513 12:55:28 -- spdk/autotest.sh@295 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:33:09.513 12:55:28 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:33:09.513 12:55:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:09.513 12:55:28 -- common/autotest_common.sh@10 -- # set +x 00:33:09.513 ************************************ 00:33:09.513 START TEST nvmf_tcp 00:33:09.513 ************************************ 00:33:09.513 12:55:28 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:33:09.513 * Looking for test storage... 00:33:09.513 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:33:09.513 12:55:28 -- nvmf/nvmf.sh@10 -- # uname -s 00:33:09.513 12:55:28 -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:33:09.513 12:55:28 -- nvmf/nvmf.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:33:09.513 12:55:28 -- nvmf/common.sh@7 -- # uname -s 00:33:09.513 12:55:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:09.513 12:55:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:09.513 12:55:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:09.513 12:55:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:09.513 12:55:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:09.513 12:55:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:09.513 12:55:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:09.513 12:55:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:09.513 12:55:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:09.513 12:55:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:09.513 12:55:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:864ae4a3-cd96-439b-8ef2-6dab4d992115 00:33:09.513 12:55:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=864ae4a3-cd96-439b-8ef2-6dab4d992115 00:33:09.513 12:55:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:09.513 12:55:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:09.513 12:55:28 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:33:09.513 12:55:28 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:33:09.513 12:55:28 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:09.513 12:55:28 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:09.513 12:55:28 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:09.513 12:55:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:09.513 12:55:28 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:09.513 12:55:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:09.513 12:55:28 -- paths/export.sh@5 -- # export PATH 00:33:09.513 12:55:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:09.513 12:55:28 -- nvmf/common.sh@46 -- # : 0 00:33:09.513 12:55:28 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:33:09.513 12:55:28 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:33:09.513 12:55:28 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:33:09.513 12:55:28 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:09.513 12:55:28 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:09.513 12:55:28 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:33:09.513 12:55:28 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:33:09.513 12:55:28 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:33:09.513 12:55:28 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:33:09.513 12:55:28 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:33:09.513 12:55:28 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:33:09.513 12:55:28 -- common/autotest_common.sh@712 -- # xtrace_disable 00:33:09.513 12:55:28 -- common/autotest_common.sh@10 -- # set +x 00:33:09.513 12:55:28 -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:33:09.513 12:55:28 -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:33:09.513 12:55:28 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:33:09.513 12:55:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:09.513 12:55:28 -- common/autotest_common.sh@10 -- # set +x 00:33:09.513 ************************************ 00:33:09.513 START TEST nvmf_example 00:33:09.513 ************************************ 00:33:09.513 12:55:28 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:33:09.513 * Looking for test storage... 
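For reference, the version.sh trace above derives the version string purely from include/spdk/version.h and checks it against the installed Python package. A condensed sketch of that extraction (header path, cut/tr pipeline and the values in the comments are taken from this run; the script above additionally maps the -pre suffix to rc0 before comparing):
  get_header_version() {
    grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" /home/vagrant/spdk_repo/spdk/include/spdk/version.h | cut -f2 | tr -d '"'
  }
  major=$(get_header_version MAJOR)    # 24 in this run
  minor=$(get_header_version MINOR)    # 1
  patch=$(get_header_version PATCH)    # 1
  suffix=$(get_header_version SUFFIX)  # -pre
  python3 -c 'import spdk; print(spdk.__version__)'   # 24.1.1rc0 here, matched against the header-derived 24.1.1rc0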
00:33:09.513 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:33:09.513 12:55:28 -- target/nvmf_example.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:33:09.513 12:55:28 -- nvmf/common.sh@7 -- # uname -s 00:33:09.513 12:55:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:09.513 12:55:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:09.513 12:55:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:09.513 12:55:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:09.513 12:55:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:09.513 12:55:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:09.513 12:55:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:09.513 12:55:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:09.513 12:55:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:09.513 12:55:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:09.513 12:55:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:864ae4a3-cd96-439b-8ef2-6dab4d992115 00:33:09.513 12:55:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=864ae4a3-cd96-439b-8ef2-6dab4d992115 00:33:09.513 12:55:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:09.513 12:55:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:09.513 12:55:28 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:33:09.513 12:55:28 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:33:09.513 12:55:28 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:09.513 12:55:28 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:09.513 12:55:28 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:09.513 12:55:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:09.513 12:55:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:09.513 12:55:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:09.513 12:55:28 -- 
paths/export.sh@5 -- # export PATH 00:33:09.513 12:55:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:09.513 12:55:28 -- nvmf/common.sh@46 -- # : 0 00:33:09.513 12:55:28 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:33:09.513 12:55:28 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:33:09.513 12:55:28 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:33:09.513 12:55:28 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:09.513 12:55:28 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:09.513 12:55:28 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:33:09.513 12:55:28 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:33:09.513 12:55:28 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:33:09.513 12:55:28 -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:33:09.513 12:55:28 -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:33:09.513 12:55:28 -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:33:09.513 12:55:28 -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:33:09.513 12:55:28 -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:33:09.513 12:55:28 -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:33:09.513 12:55:28 -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:33:09.513 12:55:28 -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:33:09.514 12:55:28 -- common/autotest_common.sh@712 -- # xtrace_disable 00:33:09.514 12:55:28 -- common/autotest_common.sh@10 -- # set +x 00:33:09.514 12:55:28 -- target/nvmf_example.sh@41 -- # nvmftestinit 00:33:09.514 12:55:28 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:33:09.514 12:55:28 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:09.514 12:55:28 -- nvmf/common.sh@436 -- # prepare_net_devs 00:33:09.514 12:55:28 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:33:09.514 12:55:28 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:33:09.514 12:55:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:09.514 12:55:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:09.514 12:55:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:09.514 12:55:28 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:33:09.514 12:55:28 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:33:09.514 12:55:28 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:33:09.514 12:55:28 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:33:09.514 12:55:28 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:33:09.514 12:55:28 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:33:09.514 12:55:28 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:09.514 12:55:28 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:09.514 12:55:28 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:33:09.514 12:55:28 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:33:09.514 12:55:28 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:33:09.514 12:55:28 
-- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:33:09.514 12:55:28 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:33:09.514 12:55:28 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:09.514 12:55:28 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:33:09.514 12:55:28 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:33:09.514 12:55:28 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:33:09.514 12:55:28 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:33:09.514 12:55:28 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:33:09.773 Cannot find device "nvmf_init_br" 00:33:09.773 12:55:28 -- nvmf/common.sh@153 -- # true 00:33:09.773 12:55:28 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:33:09.773 Cannot find device "nvmf_tgt_br" 00:33:09.773 12:55:28 -- nvmf/common.sh@154 -- # true 00:33:09.773 12:55:28 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:33:09.773 Cannot find device "nvmf_tgt_br2" 00:33:09.773 12:55:28 -- nvmf/common.sh@155 -- # true 00:33:09.773 12:55:28 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:33:09.773 Cannot find device "nvmf_init_br" 00:33:09.773 12:55:28 -- nvmf/common.sh@156 -- # true 00:33:09.773 12:55:28 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:33:09.773 Cannot find device "nvmf_tgt_br" 00:33:09.773 12:55:28 -- nvmf/common.sh@157 -- # true 00:33:09.773 12:55:28 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:33:09.773 Cannot find device "nvmf_tgt_br2" 00:33:09.773 12:55:28 -- nvmf/common.sh@158 -- # true 00:33:09.773 12:55:28 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:33:09.773 Cannot find device "nvmf_br" 00:33:09.773 12:55:29 -- nvmf/common.sh@159 -- # true 00:33:09.773 12:55:29 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:33:09.773 Cannot find device "nvmf_init_if" 00:33:09.773 12:55:29 -- nvmf/common.sh@160 -- # true 00:33:09.773 12:55:29 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:33:09.773 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:33:09.773 12:55:29 -- nvmf/common.sh@161 -- # true 00:33:09.773 12:55:29 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:33:09.773 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:33:09.773 12:55:29 -- nvmf/common.sh@162 -- # true 00:33:09.773 12:55:29 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:33:09.773 12:55:29 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:33:09.773 12:55:29 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:33:09.773 12:55:29 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:33:09.773 12:55:29 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:33:09.773 12:55:29 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:33:09.773 12:55:29 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:33:09.773 12:55:29 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:33:09.773 12:55:29 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:33:09.773 12:55:29 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:33:09.773 
12:55:29 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:33:09.773 12:55:29 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:33:09.773 12:55:29 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:33:09.773 12:55:29 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:33:09.773 12:55:29 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:33:09.773 12:55:29 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:33:09.773 12:55:29 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:33:10.032 12:55:29 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:33:10.032 12:55:29 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:33:10.032 12:55:29 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:33:10.032 12:55:29 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:33:10.032 12:55:29 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:33:10.032 12:55:29 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:33:10.032 12:55:29 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:33:10.032 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:10.032 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.128 ms 00:33:10.032 00:33:10.032 --- 10.0.0.2 ping statistics --- 00:33:10.032 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:10.032 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:33:10.032 12:55:29 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:33:10.032 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:33:10.032 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:33:10.032 00:33:10.032 --- 10.0.0.3 ping statistics --- 00:33:10.032 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:10.032 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:33:10.032 12:55:29 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:33:10.032 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:10.032 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:33:10.032 00:33:10.032 --- 10.0.0.1 ping statistics --- 00:33:10.032 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:10.032 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:33:10.032 12:55:29 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:10.032 12:55:29 -- nvmf/common.sh@421 -- # return 0 00:33:10.032 12:55:29 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:33:10.032 12:55:29 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:10.032 12:55:29 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:33:10.032 12:55:29 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:33:10.032 12:55:29 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:10.032 12:55:29 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:33:10.032 12:55:29 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:33:10.032 12:55:29 -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:33:10.032 12:55:29 -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:33:10.032 12:55:29 -- common/autotest_common.sh@712 -- # xtrace_disable 00:33:10.032 12:55:29 -- common/autotest_common.sh@10 -- # set +x 00:33:10.032 12:55:29 -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:33:10.032 12:55:29 -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:33:10.032 12:55:29 -- target/nvmf_example.sh@34 -- # nvmfpid=71756 00:33:10.032 12:55:29 -- target/nvmf_example.sh@33 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:33:10.032 12:55:29 -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:10.032 12:55:29 -- target/nvmf_example.sh@36 -- # waitforlisten 71756 00:33:10.032 12:55:29 -- common/autotest_common.sh@819 -- # '[' -z 71756 ']' 00:33:10.032 12:55:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:10.032 12:55:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:33:10.032 12:55:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:10.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
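The nvmftestinit/nvmf_veth_init calls above build a veth + network-namespace + bridge fixture and verify it with ping before the example target comes up. A condensed sketch of that fixture with one target interface only (device names, addresses and iptables rules copied from the trace; the helper above also configures nvmf_tgt_if2 with 10.0.0.3):
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br && ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2    # host-side initiator interface -> target address inside the namespace
The target is then provisioned over RPC (nvmf_create_transport -t tcp, bdev_malloc_create 64 512, nvmf_create_subsystem plus add_ns and a tcp listener on 10.0.0.2:4420) before spdk_nvme_perf connects to it, as the trace below shows.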
00:33:10.032 12:55:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:33:10.032 12:55:29 -- common/autotest_common.sh@10 -- # set +x 00:33:11.406 12:55:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:33:11.406 12:55:30 -- common/autotest_common.sh@852 -- # return 0 00:33:11.406 12:55:30 -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:33:11.406 12:55:30 -- common/autotest_common.sh@718 -- # xtrace_disable 00:33:11.406 12:55:30 -- common/autotest_common.sh@10 -- # set +x 00:33:11.406 12:55:30 -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:11.406 12:55:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:11.406 12:55:30 -- common/autotest_common.sh@10 -- # set +x 00:33:11.406 12:55:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:11.406 12:55:30 -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:33:11.406 12:55:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:11.406 12:55:30 -- common/autotest_common.sh@10 -- # set +x 00:33:11.406 12:55:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:11.406 12:55:30 -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:33:11.406 12:55:30 -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:11.406 12:55:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:11.406 12:55:30 -- common/autotest_common.sh@10 -- # set +x 00:33:11.406 12:55:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:11.406 12:55:30 -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:33:11.406 12:55:30 -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:11.406 12:55:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:11.406 12:55:30 -- common/autotest_common.sh@10 -- # set +x 00:33:11.406 12:55:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:11.406 12:55:30 -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:11.406 12:55:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:11.406 12:55:30 -- common/autotest_common.sh@10 -- # set +x 00:33:11.406 12:55:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:11.406 12:55:30 -- target/nvmf_example.sh@59 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:33:11.406 12:55:30 -- target/nvmf_example.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:33:21.370 Initializing NVMe Controllers 00:33:21.370 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:21.370 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:33:21.370 Initialization complete. Launching workers. 
00:33:21.370 ======================================================== 00:33:21.370 Latency(us) 00:33:21.370 Device Information : IOPS MiB/s Average min max 00:33:21.370 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15386.08 60.10 4159.42 736.19 23548.79 00:33:21.370 ======================================================== 00:33:21.370 Total : 15386.08 60.10 4159.42 736.19 23548.79 00:33:21.370 00:33:21.370 12:55:40 -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:33:21.370 12:55:40 -- target/nvmf_example.sh@66 -- # nvmftestfini 00:33:21.370 12:55:40 -- nvmf/common.sh@476 -- # nvmfcleanup 00:33:21.370 12:55:40 -- nvmf/common.sh@116 -- # sync 00:33:21.628 12:55:40 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:33:21.628 12:55:40 -- nvmf/common.sh@119 -- # set +e 00:33:21.628 12:55:40 -- nvmf/common.sh@120 -- # for i in {1..20} 00:33:21.628 12:55:40 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:33:21.628 rmmod nvme_tcp 00:33:21.628 rmmod nvme_fabrics 00:33:21.628 rmmod nvme_keyring 00:33:21.628 12:55:40 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:33:21.628 12:55:40 -- nvmf/common.sh@123 -- # set -e 00:33:21.628 12:55:40 -- nvmf/common.sh@124 -- # return 0 00:33:21.628 12:55:40 -- nvmf/common.sh@477 -- # '[' -n 71756 ']' 00:33:21.628 12:55:40 -- nvmf/common.sh@478 -- # killprocess 71756 00:33:21.628 12:55:40 -- common/autotest_common.sh@926 -- # '[' -z 71756 ']' 00:33:21.628 12:55:40 -- common/autotest_common.sh@930 -- # kill -0 71756 00:33:21.628 12:55:40 -- common/autotest_common.sh@931 -- # uname 00:33:21.628 12:55:40 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:33:21.628 12:55:40 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 71756 00:33:21.628 12:55:40 -- common/autotest_common.sh@932 -- # process_name=nvmf 00:33:21.628 killing process with pid 71756 00:33:21.628 12:55:40 -- common/autotest_common.sh@936 -- # '[' nvmf = sudo ']' 00:33:21.628 12:55:40 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 71756' 00:33:21.628 12:55:40 -- common/autotest_common.sh@945 -- # kill 71756 00:33:21.628 12:55:40 -- common/autotest_common.sh@950 -- # wait 71756 00:33:21.887 nvmf threads initialize successfully 00:33:21.887 bdev subsystem init successfully 00:33:21.887 created a nvmf target service 00:33:21.887 create targets's poll groups done 00:33:21.887 all subsystems of target started 00:33:21.887 nvmf target is running 00:33:21.887 all subsystems of target stopped 00:33:21.887 destroy targets's poll groups done 00:33:21.887 destroyed the nvmf target service 00:33:21.887 bdev subsystem finish successfully 00:33:21.887 nvmf threads destroy successfully 00:33:21.887 12:55:41 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:33:21.887 12:55:41 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:33:21.887 12:55:41 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:33:21.887 12:55:41 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:21.887 12:55:41 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:33:21.887 12:55:41 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:21.887 12:55:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:21.887 12:55:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:21.887 12:55:41 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:33:21.887 12:55:41 -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:33:21.887 12:55:41 -- common/autotest_common.sh@718 -- # 
xtrace_disable 00:33:21.887 12:55:41 -- common/autotest_common.sh@10 -- # set +x 00:33:21.887 00:33:21.887 real 0m12.386s 00:33:21.887 user 0m44.797s 00:33:21.887 sys 0m1.880s 00:33:21.887 12:55:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:21.887 ************************************ 00:33:21.887 12:55:41 -- common/autotest_common.sh@10 -- # set +x 00:33:21.887 END TEST nvmf_example 00:33:21.887 ************************************ 00:33:21.887 12:55:41 -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:33:21.887 12:55:41 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:33:21.887 12:55:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:21.887 12:55:41 -- common/autotest_common.sh@10 -- # set +x 00:33:21.887 ************************************ 00:33:21.887 START TEST nvmf_filesystem 00:33:21.887 ************************************ 00:33:21.887 12:55:41 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:33:22.147 * Looking for test storage... 00:33:22.147 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:33:22.147 12:55:41 -- target/filesystem.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:33:22.147 12:55:41 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:33:22.147 12:55:41 -- common/autotest_common.sh@34 -- # set -e 00:33:22.147 12:55:41 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:33:22.147 12:55:41 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:33:22.147 12:55:41 -- common/autotest_common.sh@38 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:33:22.147 12:55:41 -- common/autotest_common.sh@39 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:33:22.147 12:55:41 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:33:22.147 12:55:41 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:33:22.147 12:55:41 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:33:22.147 12:55:41 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:33:22.147 12:55:41 -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:33:22.147 12:55:41 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:33:22.147 12:55:41 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:33:22.147 12:55:41 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:33:22.147 12:55:41 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:33:22.147 12:55:41 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:33:22.147 12:55:41 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:33:22.147 12:55:41 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:33:22.147 12:55:41 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:33:22.148 12:55:41 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:33:22.148 12:55:41 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:33:22.148 12:55:41 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:33:22.148 12:55:41 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:33:22.148 12:55:41 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:33:22.148 12:55:41 -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:33:22.148 12:55:41 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:33:22.148 12:55:41 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:33:22.148 12:55:41 -- common/build_config.sh@22 -- # 
CONFIG_CET=n 00:33:22.148 12:55:41 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:33:22.148 12:55:41 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:33:22.148 12:55:41 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:33:22.148 12:55:41 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:33:22.148 12:55:41 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:33:22.148 12:55:41 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:33:22.148 12:55:41 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:33:22.148 12:55:41 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:33:22.148 12:55:41 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:33:22.148 12:55:41 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:33:22.148 12:55:41 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:33:22.148 12:55:41 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:33:22.148 12:55:41 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:33:22.148 12:55:41 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:33:22.148 12:55:41 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:33:22.148 12:55:41 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:33:22.148 12:55:41 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:33:22.148 12:55:41 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:33:22.148 12:55:41 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:33:22.148 12:55:41 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:33:22.148 12:55:41 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:33:22.148 12:55:41 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:33:22.148 12:55:41 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:33:22.148 12:55:41 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:33:22.148 12:55:41 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:33:22.148 12:55:41 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:33:22.148 12:55:41 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:33:22.148 12:55:41 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:33:22.148 12:55:41 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:33:22.148 12:55:41 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:33:22.148 12:55:41 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n 00:33:22.148 12:55:41 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:33:22.148 12:55:41 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:33:22.148 12:55:41 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:33:22.148 12:55:41 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:33:22.148 12:55:41 -- common/build_config.sh@58 -- # CONFIG_GOLANG=y 00:33:22.148 12:55:41 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:33:22.148 12:55:41 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=y 00:33:22.148 12:55:41 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:33:22.148 12:55:41 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:33:22.148 12:55:41 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:33:22.148 12:55:41 -- common/build_config.sh@64 -- # CONFIG_SHARED=y 00:33:22.148 12:55:41 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:33:22.148 12:55:41 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:33:22.148 12:55:41 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:33:22.148 12:55:41 -- common/build_config.sh@68 -- # 
CONFIG_AVAHI=y 00:33:22.148 12:55:41 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:33:22.148 12:55:41 -- common/build_config.sh@70 -- # CONFIG_RAID5F=n 00:33:22.148 12:55:41 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:33:22.148 12:55:41 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:33:22.148 12:55:41 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:33:22.148 12:55:41 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:33:22.148 12:55:41 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:33:22.148 12:55:41 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:33:22.148 12:55:41 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:33:22.148 12:55:41 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:33:22.148 12:55:41 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:33:22.148 12:55:41 -- common/autotest_common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:33:22.148 12:55:41 -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:33:22.148 12:55:41 -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:33:22.148 12:55:41 -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:33:22.148 12:55:41 -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:33:22.148 12:55:41 -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:33:22.148 12:55:41 -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:33:22.148 12:55:41 -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:33:22.148 12:55:41 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:33:22.148 12:55:41 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:33:22.148 12:55:41 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:33:22.148 12:55:41 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:33:22.148 12:55:41 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:33:22.148 12:55:41 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:33:22.148 12:55:41 -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:33:22.148 12:55:41 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:33:22.148 #define SPDK_CONFIG_H 00:33:22.148 #define SPDK_CONFIG_APPS 1 00:33:22.148 #define SPDK_CONFIG_ARCH native 00:33:22.148 #undef SPDK_CONFIG_ASAN 00:33:22.148 #define SPDK_CONFIG_AVAHI 1 00:33:22.148 #undef SPDK_CONFIG_CET 00:33:22.148 #define SPDK_CONFIG_COVERAGE 1 00:33:22.148 #define SPDK_CONFIG_CROSS_PREFIX 00:33:22.148 #undef SPDK_CONFIG_CRYPTO 00:33:22.148 #undef SPDK_CONFIG_CRYPTO_MLX5 00:33:22.148 #undef SPDK_CONFIG_CUSTOMOCF 00:33:22.148 #undef SPDK_CONFIG_DAOS 00:33:22.148 #define SPDK_CONFIG_DAOS_DIR 00:33:22.148 #define SPDK_CONFIG_DEBUG 1 00:33:22.148 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:33:22.148 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/dpdk/build 00:33:22.148 #define SPDK_CONFIG_DPDK_INC_DIR //home/vagrant/spdk_repo/dpdk/build/include 00:33:22.148 #define SPDK_CONFIG_DPDK_LIB_DIR /home/vagrant/spdk_repo/dpdk/build/lib 00:33:22.148 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:33:22.148 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:33:22.148 #define SPDK_CONFIG_EXAMPLES 1 00:33:22.148 #undef SPDK_CONFIG_FC 00:33:22.148 #define 
SPDK_CONFIG_FC_PATH 00:33:22.148 #define SPDK_CONFIG_FIO_PLUGIN 1 00:33:22.148 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:33:22.148 #undef SPDK_CONFIG_FUSE 00:33:22.148 #undef SPDK_CONFIG_FUZZER 00:33:22.148 #define SPDK_CONFIG_FUZZER_LIB 00:33:22.148 #define SPDK_CONFIG_GOLANG 1 00:33:22.148 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:33:22.148 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:33:22.148 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:33:22.148 #undef SPDK_CONFIG_HAVE_LIBBSD 00:33:22.148 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:33:22.148 #define SPDK_CONFIG_IDXD 1 00:33:22.148 #define SPDK_CONFIG_IDXD_KERNEL 1 00:33:22.148 #undef SPDK_CONFIG_IPSEC_MB 00:33:22.148 #define SPDK_CONFIG_IPSEC_MB_DIR 00:33:22.148 #define SPDK_CONFIG_ISAL 1 00:33:22.148 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:33:22.148 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:33:22.148 #define SPDK_CONFIG_LIBDIR 00:33:22.148 #undef SPDK_CONFIG_LTO 00:33:22.148 #define SPDK_CONFIG_MAX_LCORES 00:33:22.148 #define SPDK_CONFIG_NVME_CUSE 1 00:33:22.148 #undef SPDK_CONFIG_OCF 00:33:22.148 #define SPDK_CONFIG_OCF_PATH 00:33:22.148 #define SPDK_CONFIG_OPENSSL_PATH 00:33:22.148 #undef SPDK_CONFIG_PGO_CAPTURE 00:33:22.148 #undef SPDK_CONFIG_PGO_USE 00:33:22.148 #define SPDK_CONFIG_PREFIX /usr/local 00:33:22.148 #undef SPDK_CONFIG_RAID5F 00:33:22.148 #undef SPDK_CONFIG_RBD 00:33:22.148 #define SPDK_CONFIG_RDMA 1 00:33:22.148 #define SPDK_CONFIG_RDMA_PROV verbs 00:33:22.148 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:33:22.148 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:33:22.148 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:33:22.148 #define SPDK_CONFIG_SHARED 1 00:33:22.148 #undef SPDK_CONFIG_SMA 00:33:22.148 #define SPDK_CONFIG_TESTS 1 00:33:22.148 #undef SPDK_CONFIG_TSAN 00:33:22.148 #define SPDK_CONFIG_UBLK 1 00:33:22.148 #define SPDK_CONFIG_UBSAN 1 00:33:22.148 #undef SPDK_CONFIG_UNIT_TESTS 00:33:22.148 #undef SPDK_CONFIG_URING 00:33:22.148 #define SPDK_CONFIG_URING_PATH 00:33:22.148 #undef SPDK_CONFIG_URING_ZNS 00:33:22.148 #define SPDK_CONFIG_USDT 1 00:33:22.148 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:33:22.148 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:33:22.148 #undef SPDK_CONFIG_VFIO_USER 00:33:22.148 #define SPDK_CONFIG_VFIO_USER_DIR 00:33:22.148 #define SPDK_CONFIG_VHOST 1 00:33:22.148 #define SPDK_CONFIG_VIRTIO 1 00:33:22.148 #undef SPDK_CONFIG_VTUNE 00:33:22.148 #define SPDK_CONFIG_VTUNE_DIR 00:33:22.148 #define SPDK_CONFIG_WERROR 1 00:33:22.148 #define SPDK_CONFIG_WPDK_DIR 00:33:22.148 #undef SPDK_CONFIG_XNVME 00:33:22.148 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:33:22.148 12:55:41 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:33:22.148 12:55:41 -- common/autotest_common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:33:22.148 12:55:41 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:22.149 12:55:41 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:22.149 12:55:41 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:22.149 12:55:41 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:22.149 12:55:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:22.149 12:55:41 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:22.149 12:55:41 -- paths/export.sh@5 -- # export PATH 00:33:22.149 12:55:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:22.149 12:55:41 -- common/autotest_common.sh@50 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:33:22.149 12:55:41 -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:33:22.149 12:55:41 -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:33:22.149 12:55:41 -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:33:22.149 12:55:41 -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:33:22.149 12:55:41 -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:33:22.149 12:55:41 -- pm/common@16 -- # TEST_TAG=N/A 00:33:22.149 12:55:41 -- pm/common@17 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:33:22.149 12:55:41 -- common/autotest_common.sh@52 -- # : 1 00:33:22.149 12:55:41 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:33:22.149 12:55:41 -- common/autotest_common.sh@56 -- # : 0 00:33:22.149 12:55:41 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:33:22.149 12:55:41 -- common/autotest_common.sh@58 -- # : 0 00:33:22.149 12:55:41 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:33:22.149 12:55:41 -- 
common/autotest_common.sh@60 -- # : 1 00:33:22.149 12:55:41 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:33:22.149 12:55:41 -- common/autotest_common.sh@62 -- # : 0 00:33:22.149 12:55:41 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:33:22.149 12:55:41 -- common/autotest_common.sh@64 -- # : 00:33:22.149 12:55:41 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:33:22.149 12:55:41 -- common/autotest_common.sh@66 -- # : 0 00:33:22.149 12:55:41 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:33:22.149 12:55:41 -- common/autotest_common.sh@68 -- # : 0 00:33:22.149 12:55:41 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:33:22.149 12:55:41 -- common/autotest_common.sh@70 -- # : 0 00:33:22.149 12:55:41 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:33:22.149 12:55:41 -- common/autotest_common.sh@72 -- # : 0 00:33:22.149 12:55:41 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:33:22.149 12:55:41 -- common/autotest_common.sh@74 -- # : 0 00:33:22.149 12:55:41 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:33:22.149 12:55:41 -- common/autotest_common.sh@76 -- # : 0 00:33:22.149 12:55:41 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:33:22.149 12:55:41 -- common/autotest_common.sh@78 -- # : 0 00:33:22.149 12:55:41 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:33:22.149 12:55:41 -- common/autotest_common.sh@80 -- # : 0 00:33:22.149 12:55:41 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:33:22.149 12:55:41 -- common/autotest_common.sh@82 -- # : 0 00:33:22.149 12:55:41 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:33:22.149 12:55:41 -- common/autotest_common.sh@84 -- # : 0 00:33:22.149 12:55:41 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:33:22.149 12:55:41 -- common/autotest_common.sh@86 -- # : 1 00:33:22.149 12:55:41 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:33:22.149 12:55:41 -- common/autotest_common.sh@88 -- # : 0 00:33:22.149 12:55:41 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:33:22.149 12:55:41 -- common/autotest_common.sh@90 -- # : 0 00:33:22.149 12:55:41 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:33:22.149 12:55:41 -- common/autotest_common.sh@92 -- # : 0 00:33:22.149 12:55:41 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:33:22.149 12:55:41 -- common/autotest_common.sh@94 -- # : 0 00:33:22.149 12:55:41 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:33:22.149 12:55:41 -- common/autotest_common.sh@96 -- # : tcp 00:33:22.149 12:55:41 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:33:22.149 12:55:41 -- common/autotest_common.sh@98 -- # : 0 00:33:22.149 12:55:41 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:33:22.149 12:55:41 -- common/autotest_common.sh@100 -- # : 0 00:33:22.149 12:55:41 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:33:22.149 12:55:41 -- common/autotest_common.sh@102 -- # : 0 00:33:22.149 12:55:41 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:33:22.149 12:55:41 -- common/autotest_common.sh@104 -- # : 0 00:33:22.149 12:55:41 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:33:22.149 12:55:41 -- common/autotest_common.sh@106 -- # : 0 00:33:22.149 12:55:41 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:33:22.149 
12:55:41 -- common/autotest_common.sh@108 -- # : 0 00:33:22.149 12:55:41 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:33:22.149 12:55:41 -- common/autotest_common.sh@110 -- # : 0 00:33:22.149 12:55:41 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:33:22.149 12:55:41 -- common/autotest_common.sh@112 -- # : 0 00:33:22.149 12:55:41 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:33:22.149 12:55:41 -- common/autotest_common.sh@114 -- # : 0 00:33:22.149 12:55:41 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:33:22.149 12:55:41 -- common/autotest_common.sh@116 -- # : 1 00:33:22.149 12:55:41 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:33:22.149 12:55:41 -- common/autotest_common.sh@118 -- # : /home/vagrant/spdk_repo/dpdk/build 00:33:22.149 12:55:41 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:33:22.149 12:55:41 -- common/autotest_common.sh@120 -- # : 0 00:33:22.149 12:55:41 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:33:22.149 12:55:41 -- common/autotest_common.sh@122 -- # : 0 00:33:22.149 12:55:41 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:33:22.149 12:55:41 -- common/autotest_common.sh@124 -- # : 0 00:33:22.149 12:55:41 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:33:22.149 12:55:41 -- common/autotest_common.sh@126 -- # : 0 00:33:22.149 12:55:41 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:33:22.149 12:55:41 -- common/autotest_common.sh@128 -- # : 0 00:33:22.149 12:55:41 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:33:22.149 12:55:41 -- common/autotest_common.sh@130 -- # : 0 00:33:22.149 12:55:41 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:33:22.149 12:55:41 -- common/autotest_common.sh@132 -- # : v22.11.4 00:33:22.149 12:55:41 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:33:22.149 12:55:41 -- common/autotest_common.sh@134 -- # : true 00:33:22.149 12:55:41 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:33:22.149 12:55:41 -- common/autotest_common.sh@136 -- # : 0 00:33:22.149 12:55:41 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:33:22.149 12:55:41 -- common/autotest_common.sh@138 -- # : 0 00:33:22.149 12:55:41 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:33:22.149 12:55:41 -- common/autotest_common.sh@140 -- # : 1 00:33:22.149 12:55:41 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:33:22.149 12:55:41 -- common/autotest_common.sh@142 -- # : 0 00:33:22.149 12:55:41 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:33:22.149 12:55:41 -- common/autotest_common.sh@144 -- # : 0 00:33:22.149 12:55:41 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:33:22.149 12:55:41 -- common/autotest_common.sh@146 -- # : 0 00:33:22.149 12:55:41 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:33:22.149 12:55:41 -- common/autotest_common.sh@148 -- # : 00:33:22.149 12:55:41 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:33:22.149 12:55:41 -- common/autotest_common.sh@150 -- # : 0 00:33:22.149 12:55:41 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:33:22.149 12:55:41 -- common/autotest_common.sh@152 -- # : 0 00:33:22.149 12:55:41 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 00:33:22.149 12:55:41 -- common/autotest_common.sh@154 -- # : 0 00:33:22.149 12:55:41 -- 
common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:33:22.149 12:55:41 -- common/autotest_common.sh@156 -- # : 0 00:33:22.149 12:55:41 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:33:22.149 12:55:41 -- common/autotest_common.sh@158 -- # : 0 00:33:22.149 12:55:41 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:33:22.149 12:55:41 -- common/autotest_common.sh@160 -- # : 0 00:33:22.149 12:55:41 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:33:22.149 12:55:41 -- common/autotest_common.sh@163 -- # : 00:33:22.149 12:55:41 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:33:22.149 12:55:41 -- common/autotest_common.sh@165 -- # : 1 00:33:22.150 12:55:41 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:33:22.150 12:55:41 -- common/autotest_common.sh@167 -- # : 1 00:33:22.150 12:55:41 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:33:22.150 12:55:41 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:33:22.150 12:55:41 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:33:22.150 12:55:41 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:33:22.150 12:55:41 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:33:22.150 12:55:41 -- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:33:22.150 12:55:41 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:33:22.150 12:55:41 -- common/autotest_common.sh@174 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:33:22.150 12:55:41 -- common/autotest_common.sh@174 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:33:22.150 12:55:41 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:33:22.150 12:55:41 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:33:22.150 12:55:41 -- common/autotest_common.sh@181 -- # export 
PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:33:22.150 12:55:41 -- common/autotest_common.sh@181 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:33:22.150 12:55:41 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:33:22.150 12:55:41 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:33:22.150 12:55:41 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:33:22.150 12:55:41 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:33:22.150 12:55:41 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:33:22.150 12:55:41 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:33:22.150 12:55:41 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:33:22.150 12:55:41 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:33:22.150 12:55:41 -- common/autotest_common.sh@196 -- # cat 00:33:22.150 12:55:41 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:33:22.150 12:55:41 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:33:22.150 12:55:41 -- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:33:22.150 12:55:41 -- common/autotest_common.sh@226 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:33:22.150 12:55:41 -- common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:33:22.150 12:55:41 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:33:22.150 12:55:41 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:33:22.150 12:55:41 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:33:22.150 12:55:41 -- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:33:22.150 12:55:41 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:33:22.150 12:55:41 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:33:22.150 12:55:41 -- common/autotest_common.sh@239 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:33:22.150 12:55:41 -- common/autotest_common.sh@239 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:33:22.150 12:55:41 -- common/autotest_common.sh@240 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:33:22.150 12:55:41 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:33:22.150 12:55:41 -- common/autotest_common.sh@242 -- # export 
AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:33:22.150 12:55:41 -- common/autotest_common.sh@242 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:33:22.150 12:55:41 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:33:22.150 12:55:41 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:33:22.150 12:55:41 -- common/autotest_common.sh@248 -- # '[' 0 -eq 0 ']' 00:33:22.150 12:55:41 -- common/autotest_common.sh@249 -- # export valgrind= 00:33:22.150 12:55:41 -- common/autotest_common.sh@249 -- # valgrind= 00:33:22.150 12:55:41 -- common/autotest_common.sh@255 -- # uname -s 00:33:22.150 12:55:41 -- common/autotest_common.sh@255 -- # '[' Linux = Linux ']' 00:33:22.150 12:55:41 -- common/autotest_common.sh@256 -- # HUGEMEM=4096 00:33:22.150 12:55:41 -- common/autotest_common.sh@257 -- # export CLEAR_HUGE=yes 00:33:22.150 12:55:41 -- common/autotest_common.sh@257 -- # CLEAR_HUGE=yes 00:33:22.150 12:55:41 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:33:22.150 12:55:41 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:33:22.150 12:55:41 -- common/autotest_common.sh@265 -- # MAKE=make 00:33:22.150 12:55:41 -- common/autotest_common.sh@266 -- # MAKEFLAGS=-j10 00:33:22.150 12:55:41 -- common/autotest_common.sh@282 -- # export HUGEMEM=4096 00:33:22.150 12:55:41 -- common/autotest_common.sh@282 -- # HUGEMEM=4096 00:33:22.150 12:55:41 -- common/autotest_common.sh@284 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:33:22.150 12:55:41 -- common/autotest_common.sh@289 -- # NO_HUGE=() 00:33:22.150 12:55:41 -- common/autotest_common.sh@290 -- # TEST_MODE= 00:33:22.150 12:55:41 -- common/autotest_common.sh@291 -- # for i in "$@" 00:33:22.150 12:55:41 -- common/autotest_common.sh@292 -- # case "$i" in 00:33:22.150 12:55:41 -- common/autotest_common.sh@297 -- # TEST_TRANSPORT=tcp 00:33:22.150 12:55:41 -- common/autotest_common.sh@309 -- # [[ -z 71997 ]] 00:33:22.150 12:55:41 -- common/autotest_common.sh@309 -- # kill -0 71997 00:33:22.150 12:55:41 -- common/autotest_common.sh@1665 -- # set_test_storage 2147483648 00:33:22.150 12:55:41 -- common/autotest_common.sh@319 -- # [[ -v testdir ]] 00:33:22.150 12:55:41 -- common/autotest_common.sh@321 -- # local requested_size=2147483648 00:33:22.150 12:55:41 -- common/autotest_common.sh@322 -- # local mount target_dir 00:33:22.150 12:55:41 -- common/autotest_common.sh@324 -- # local -A mounts fss sizes avails uses 00:33:22.150 12:55:41 -- common/autotest_common.sh@325 -- # local source fs size avail mount use 00:33:22.150 12:55:41 -- common/autotest_common.sh@327 -- # local storage_fallback storage_candidates 00:33:22.150 12:55:41 -- common/autotest_common.sh@329 -- # mktemp -udt spdk.XXXXXX 00:33:22.150 12:55:41 -- common/autotest_common.sh@329 -- # storage_fallback=/tmp/spdk.JXBZxh 00:33:22.150 12:55:41 -- common/autotest_common.sh@334 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:33:22.150 12:55:41 -- common/autotest_common.sh@336 -- # [[ -n '' ]] 00:33:22.150 12:55:41 -- common/autotest_common.sh@341 -- # [[ -n '' ]] 00:33:22.150 12:55:41 -- common/autotest_common.sh@346 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvmf/target /tmp/spdk.JXBZxh/tests/target /tmp/spdk.JXBZxh 00:33:22.150 12:55:41 -- common/autotest_common.sh@349 -- # requested_size=2214592512 00:33:22.150 12:55:41 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:33:22.150 12:55:41 -- 
common/autotest_common.sh@318 -- # grep -v Filesystem 00:33:22.150 12:55:41 -- common/autotest_common.sh@318 -- # df -T 00:33:22.150 12:55:41 -- common/autotest_common.sh@352 -- # mounts["$mount"]=devtmpfs 00:33:22.150 12:55:41 -- common/autotest_common.sh@352 -- # fss["$mount"]=devtmpfs 00:33:22.150 12:55:41 -- common/autotest_common.sh@353 -- # avails["$mount"]=4194304 00:33:22.150 12:55:41 -- common/autotest_common.sh@353 -- # sizes["$mount"]=4194304 00:33:22.150 12:55:41 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:33:22.150 12:55:41 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:33:22.150 12:55:41 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:33:22.150 12:55:41 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:33:22.150 12:55:41 -- common/autotest_common.sh@353 -- # avails["$mount"]=6266634240 00:33:22.150 12:55:41 -- common/autotest_common.sh@353 -- # sizes["$mount"]=6267891712 00:33:22.150 12:55:41 -- common/autotest_common.sh@354 -- # uses["$mount"]=1257472 00:33:22.150 12:55:41 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:33:22.150 12:55:41 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:33:22.150 12:55:41 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:33:22.150 12:55:41 -- common/autotest_common.sh@353 -- # avails["$mount"]=2494353408 00:33:22.150 12:55:41 -- common/autotest_common.sh@353 -- # sizes["$mount"]=2507157504 00:33:22.150 12:55:41 -- common/autotest_common.sh@354 -- # uses["$mount"]=12804096 00:33:22.150 12:55:41 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:33:22.150 12:55:41 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/vda5 00:33:22.150 12:55:41 -- common/autotest_common.sh@352 -- # fss["$mount"]=btrfs 00:33:22.150 12:55:41 -- common/autotest_common.sh@353 -- # avails["$mount"]=12133134336 00:33:22.150 12:55:41 -- common/autotest_common.sh@353 -- # sizes["$mount"]=20314062848 00:33:22.150 12:55:41 -- common/autotest_common.sh@354 -- # uses["$mount"]=5839671296 00:33:22.150 12:55:41 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:33:22.151 12:55:41 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/vda5 00:33:22.151 12:55:41 -- common/autotest_common.sh@352 -- # fss["$mount"]=btrfs 00:33:22.151 12:55:41 -- common/autotest_common.sh@353 -- # avails["$mount"]=12133134336 00:33:22.151 12:55:41 -- common/autotest_common.sh@353 -- # sizes["$mount"]=20314062848 00:33:22.151 12:55:41 -- common/autotest_common.sh@354 -- # uses["$mount"]=5839671296 00:33:22.151 12:55:41 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:33:22.151 12:55:41 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:33:22.151 12:55:41 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:33:22.151 12:55:41 -- common/autotest_common.sh@353 -- # avails["$mount"]=6267756544 00:33:22.151 12:55:41 -- common/autotest_common.sh@353 -- # sizes["$mount"]=6267895808 00:33:22.151 12:55:41 -- common/autotest_common.sh@354 -- # uses["$mount"]=139264 00:33:22.151 12:55:41 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:33:22.151 12:55:41 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/vda2 00:33:22.151 12:55:41 -- common/autotest_common.sh@352 -- # fss["$mount"]=ext4 00:33:22.151 12:55:41 -- common/autotest_common.sh@353 -- # avails["$mount"]=843546624 00:33:22.151 12:55:41 -- 
common/autotest_common.sh@353 -- # sizes["$mount"]=1012768768 00:33:22.151 12:55:41 -- common/autotest_common.sh@354 -- # uses["$mount"]=100016128 00:33:22.151 12:55:41 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:33:22.151 12:55:41 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/vda3 00:33:22.151 12:55:41 -- common/autotest_common.sh@352 -- # fss["$mount"]=vfat 00:33:22.151 12:55:41 -- common/autotest_common.sh@353 -- # avails["$mount"]=92499968 00:33:22.151 12:55:41 -- common/autotest_common.sh@353 -- # sizes["$mount"]=104607744 00:33:22.151 12:55:41 -- common/autotest_common.sh@354 -- # uses["$mount"]=12107776 00:33:22.151 12:55:41 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:33:22.151 12:55:41 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:33:22.151 12:55:41 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:33:22.151 12:55:41 -- common/autotest_common.sh@353 -- # avails["$mount"]=1253572608 00:33:22.151 12:55:41 -- common/autotest_common.sh@353 -- # sizes["$mount"]=1253576704 00:33:22.151 12:55:41 -- common/autotest_common.sh@354 -- # uses["$mount"]=4096 00:33:22.151 12:55:41 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:33:22.151 12:55:41 -- common/autotest_common.sh@352 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt/output 00:33:22.151 12:55:41 -- common/autotest_common.sh@352 -- # fss["$mount"]=fuse.sshfs 00:33:22.151 12:55:41 -- common/autotest_common.sh@353 -- # avails["$mount"]=92440453120 00:33:22.151 12:55:41 -- common/autotest_common.sh@353 -- # sizes["$mount"]=105088212992 00:33:22.151 12:55:41 -- common/autotest_common.sh@354 -- # uses["$mount"]=7262326784 00:33:22.151 12:55:41 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:33:22.151 12:55:41 -- common/autotest_common.sh@357 -- # printf '* Looking for test storage...\n' 00:33:22.151 * Looking for test storage... 
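For orientation: the df -T walk traced above is the set_test_storage helper sizing up candidate directories before it settles on a test storage location. A rough, self-contained sketch of that selection logic (variable names shortened; the byte conversion and slack amount are inferred from the values in the trace, not quoted from the script):

# Sketch only: pick the first candidate directory whose filesystem has enough free space.
testdir=/home/vagrant/spdk_repo/spdk/test/nvmf/target
fallback=$(mktemp -udt spdk.XXXXXX)                   # e.g. /tmp/spdk.JXBZxh in the trace
requested_size=$((2147483648 + 64 * 1024 * 1024))     # 2 GiB of test data plus slack (the 2214592512 above)
declare -A avails
while read -r src fs size used avail _ mnt; do
    avails["$mnt"]=$((avail * 1024))                  # df -T reports 1K blocks; keep everything in bytes
done < <(df -T | grep -v Filesystem)
for dir in "$testdir" "$fallback/tests/${testdir##*/}" "$fallback"; do
    mnt=$(df "$dir" 2>/dev/null | awk '$1 !~ /Filesystem/ {print $6}')
    [[ -n $mnt ]] || continue
    ((${avails[$mnt]:-0} >= requested_size)) && { printf '* Found test storage at %s\n' "$dir"; break; }
done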
00:33:22.151 12:55:41 -- common/autotest_common.sh@359 -- # local target_space new_size 00:33:22.151 12:55:41 -- common/autotest_common.sh@360 -- # for target_dir in "${storage_candidates[@]}" 00:33:22.151 12:55:41 -- common/autotest_common.sh@363 -- # awk '$1 !~ /Filesystem/{print $6}' 00:33:22.151 12:55:41 -- common/autotest_common.sh@363 -- # df /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:33:22.151 12:55:41 -- common/autotest_common.sh@363 -- # mount=/home 00:33:22.151 12:55:41 -- common/autotest_common.sh@365 -- # target_space=12133134336 00:33:22.151 12:55:41 -- common/autotest_common.sh@366 -- # (( target_space == 0 || target_space < requested_size )) 00:33:22.151 12:55:41 -- common/autotest_common.sh@369 -- # (( target_space >= requested_size )) 00:33:22.151 12:55:41 -- common/autotest_common.sh@371 -- # [[ btrfs == tmpfs ]] 00:33:22.151 12:55:41 -- common/autotest_common.sh@371 -- # [[ btrfs == ramfs ]] 00:33:22.151 12:55:41 -- common/autotest_common.sh@371 -- # [[ /home == / ]] 00:33:22.151 12:55:41 -- common/autotest_common.sh@378 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:33:22.151 12:55:41 -- common/autotest_common.sh@378 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:33:22.151 12:55:41 -- common/autotest_common.sh@379 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:33:22.151 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:33:22.151 12:55:41 -- common/autotest_common.sh@380 -- # return 0 00:33:22.151 12:55:41 -- common/autotest_common.sh@1667 -- # set -o errtrace 00:33:22.151 12:55:41 -- common/autotest_common.sh@1668 -- # shopt -s extdebug 00:33:22.151 12:55:41 -- common/autotest_common.sh@1669 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:33:22.151 12:55:41 -- common/autotest_common.sh@1671 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:33:22.151 12:55:41 -- common/autotest_common.sh@1672 -- # true 00:33:22.151 12:55:41 -- common/autotest_common.sh@1674 -- # xtrace_fd 00:33:22.151 12:55:41 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:33:22.151 12:55:41 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:33:22.151 12:55:41 -- common/autotest_common.sh@27 -- # exec 00:33:22.151 12:55:41 -- common/autotest_common.sh@29 -- # exec 00:33:22.151 12:55:41 -- common/autotest_common.sh@31 -- # xtrace_restore 00:33:22.151 12:55:41 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:33:22.151 12:55:41 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:33:22.151 12:55:41 -- common/autotest_common.sh@18 -- # set -x 00:33:22.151 12:55:41 -- target/filesystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:33:22.151 12:55:41 -- nvmf/common.sh@7 -- # uname -s 00:33:22.151 12:55:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:22.151 12:55:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:22.151 12:55:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:22.151 12:55:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:22.151 12:55:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:22.151 12:55:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:22.151 12:55:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:22.151 12:55:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:22.151 12:55:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:22.151 12:55:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:22.151 12:55:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:864ae4a3-cd96-439b-8ef2-6dab4d992115 00:33:22.151 12:55:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=864ae4a3-cd96-439b-8ef2-6dab4d992115 00:33:22.151 12:55:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:22.151 12:55:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:22.151 12:55:41 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:33:22.151 12:55:41 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:33:22.151 12:55:41 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:22.151 12:55:41 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:22.151 12:55:41 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:22.151 12:55:41 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:22.151 12:55:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:22.151 12:55:41 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:22.151 12:55:41 -- paths/export.sh@5 -- # export PATH 00:33:22.151 12:55:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:22.151 12:55:41 -- nvmf/common.sh@46 -- # : 0 00:33:22.151 12:55:41 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:33:22.151 12:55:41 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:33:22.151 12:55:41 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:33:22.151 12:55:41 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:22.151 12:55:41 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:22.151 12:55:41 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:33:22.151 12:55:41 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:33:22.151 12:55:41 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:33:22.151 12:55:41 -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:33:22.151 12:55:41 -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:33:22.151 12:55:41 -- target/filesystem.sh@15 -- # nvmftestinit 00:33:22.151 12:55:41 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:33:22.151 12:55:41 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:22.151 12:55:41 -- nvmf/common.sh@436 -- # prepare_net_devs 00:33:22.151 12:55:41 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:33:22.151 12:55:41 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:33:22.151 12:55:41 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:22.151 12:55:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:22.151 12:55:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:22.151 12:55:41 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:33:22.151 12:55:41 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:33:22.151 12:55:41 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:33:22.152 12:55:41 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:33:22.152 12:55:41 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:33:22.152 12:55:41 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:33:22.152 12:55:41 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:22.152 12:55:41 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:22.152 12:55:41 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:33:22.152 12:55:41 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:33:22.152 12:55:41 -- nvmf/common.sh@144 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:33:22.152 12:55:41 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:33:22.152 12:55:41 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:33:22.152 12:55:41 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:22.152 12:55:41 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:33:22.152 12:55:41 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:33:22.152 12:55:41 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:33:22.152 12:55:41 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:33:22.152 12:55:41 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:33:22.152 12:55:41 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:33:22.152 Cannot find device "nvmf_tgt_br" 00:33:22.152 12:55:41 -- nvmf/common.sh@154 -- # true 00:33:22.152 12:55:41 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:33:22.152 Cannot find device "nvmf_tgt_br2" 00:33:22.152 12:55:41 -- nvmf/common.sh@155 -- # true 00:33:22.152 12:55:41 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:33:22.152 12:55:41 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:33:22.152 Cannot find device "nvmf_tgt_br" 00:33:22.152 12:55:41 -- nvmf/common.sh@157 -- # true 00:33:22.152 12:55:41 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:33:22.152 Cannot find device "nvmf_tgt_br2" 00:33:22.152 12:55:41 -- nvmf/common.sh@158 -- # true 00:33:22.152 12:55:41 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:33:22.152 12:55:41 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:33:22.410 12:55:41 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:33:22.410 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:33:22.410 12:55:41 -- nvmf/common.sh@161 -- # true 00:33:22.410 12:55:41 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:33:22.410 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:33:22.410 12:55:41 -- nvmf/common.sh@162 -- # true 00:33:22.410 12:55:41 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:33:22.410 12:55:41 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:33:22.410 12:55:41 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:33:22.410 12:55:41 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:33:22.410 12:55:41 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:33:22.410 12:55:41 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:33:22.410 12:55:41 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:33:22.410 12:55:41 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:33:22.410 12:55:41 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:33:22.410 12:55:41 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:33:22.410 12:55:41 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:33:22.410 12:55:41 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:33:22.410 12:55:41 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:33:22.410 12:55:41 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:33:22.410 12:55:41 
-- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:33:22.410 12:55:41 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:33:22.410 12:55:41 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:33:22.410 12:55:41 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:33:22.410 12:55:41 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:33:22.410 12:55:41 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:33:22.410 12:55:41 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:33:22.411 12:55:41 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:33:22.411 12:55:41 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:33:22.411 12:55:41 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:33:22.411 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:22.411 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.104 ms 00:33:22.411 00:33:22.411 --- 10.0.0.2 ping statistics --- 00:33:22.411 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:22.411 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:33:22.411 12:55:41 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:33:22.411 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:33:22.411 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms 00:33:22.411 00:33:22.411 --- 10.0.0.3 ping statistics --- 00:33:22.411 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:22.411 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:33:22.411 12:55:41 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:33:22.411 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:22.411 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.040 ms 00:33:22.411 00:33:22.411 --- 10.0.0.1 ping statistics --- 00:33:22.411 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:22.411 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:33:22.411 12:55:41 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:22.411 12:55:41 -- nvmf/common.sh@421 -- # return 0 00:33:22.411 12:55:41 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:33:22.411 12:55:41 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:22.411 12:55:41 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:33:22.411 12:55:41 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:33:22.411 12:55:41 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:22.411 12:55:41 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:33:22.411 12:55:41 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:33:22.669 12:55:41 -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:33:22.669 12:55:41 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:33:22.669 12:55:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:22.669 12:55:41 -- common/autotest_common.sh@10 -- # set +x 00:33:22.669 ************************************ 00:33:22.669 START TEST nvmf_filesystem_no_in_capsule 00:33:22.669 ************************************ 00:33:22.669 12:55:41 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_part 0 00:33:22.669 12:55:41 -- target/filesystem.sh@47 -- # in_capsule=0 00:33:22.669 12:55:41 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:33:22.669 12:55:41 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:33:22.669 12:55:41 -- common/autotest_common.sh@712 -- # 
xtrace_disable 00:33:22.669 12:55:41 -- common/autotest_common.sh@10 -- # set +x 00:33:22.669 12:55:41 -- nvmf/common.sh@469 -- # nvmfpid=72155 00:33:22.669 12:55:41 -- nvmf/common.sh@470 -- # waitforlisten 72155 00:33:22.669 12:55:41 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:33:22.669 12:55:41 -- common/autotest_common.sh@819 -- # '[' -z 72155 ']' 00:33:22.669 12:55:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:22.669 12:55:41 -- common/autotest_common.sh@824 -- # local max_retries=100 00:33:22.669 12:55:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:22.669 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:22.669 12:55:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:33:22.669 12:55:41 -- common/autotest_common.sh@10 -- # set +x 00:33:22.669 [2024-07-22 12:55:41.913219] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:33:22.669 [2024-07-22 12:55:41.913333] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:22.669 [2024-07-22 12:55:42.062551] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:22.928 [2024-07-22 12:55:42.165661] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:33:22.928 [2024-07-22 12:55:42.165854] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:22.928 [2024-07-22 12:55:42.165880] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:22.928 [2024-07-22 12:55:42.165897] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
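Condensed from the nvmf_veth_init trace above: three veth pairs are created, their root-namespace ends are enslaved to a bridge (nvmf_br), the initiator end keeps 10.0.0.1 in the root namespace, and the two target ends move into nvmf_tgt_ns_spdk with 10.0.0.2 and 10.0.0.3, which is where the nvmf_tgt process above is launched. A standalone sketch of the same layout (device, namespace and address names taken from the log):

# Sketch of the veth/bridge/netns topology used for the TCP test (names as in the trace).
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side, stays in the root ns
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # first target interface
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2   # second target interface
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk sh -c 'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br                     # bridge the three root-ns ends together
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                    # reachability check before starting the target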
00:33:22.928 [2024-07-22 12:55:42.166017] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:22.928 [2024-07-22 12:55:42.166715] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:33:22.928 [2024-07-22 12:55:42.166835] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:33:22.928 [2024-07-22 12:55:42.166855] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:23.495 12:55:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:33:23.495 12:55:42 -- common/autotest_common.sh@852 -- # return 0 00:33:23.495 12:55:42 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:33:23.495 12:55:42 -- common/autotest_common.sh@718 -- # xtrace_disable 00:33:23.495 12:55:42 -- common/autotest_common.sh@10 -- # set +x 00:33:23.495 12:55:42 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:23.495 12:55:42 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:33:23.495 12:55:42 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:33:23.495 12:55:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:23.495 12:55:42 -- common/autotest_common.sh@10 -- # set +x 00:33:23.495 [2024-07-22 12:55:42.862815] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:23.495 12:55:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:23.495 12:55:42 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:33:23.495 12:55:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:23.495 12:55:42 -- common/autotest_common.sh@10 -- # set +x 00:33:23.756 Malloc1 00:33:23.756 12:55:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:23.756 12:55:43 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:33:23.756 12:55:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:23.756 12:55:43 -- common/autotest_common.sh@10 -- # set +x 00:33:23.756 12:55:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:23.756 12:55:43 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:33:23.756 12:55:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:23.756 12:55:43 -- common/autotest_common.sh@10 -- # set +x 00:33:23.756 12:55:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:23.756 12:55:43 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:23.756 12:55:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:23.756 12:55:43 -- common/autotest_common.sh@10 -- # set +x 00:33:23.756 [2024-07-22 12:55:43.051932] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:23.756 12:55:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:23.756 12:55:43 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:33:23.756 12:55:43 -- common/autotest_common.sh@1357 -- # local bdev_name=Malloc1 00:33:23.756 12:55:43 -- common/autotest_common.sh@1358 -- # local bdev_info 00:33:23.756 12:55:43 -- common/autotest_common.sh@1359 -- # local bs 00:33:23.756 12:55:43 -- common/autotest_common.sh@1360 -- # local nb 00:33:23.756 12:55:43 -- common/autotest_common.sh@1361 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:33:23.756 12:55:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:23.756 12:55:43 -- common/autotest_common.sh@10 -- # set +x 00:33:23.757 
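The target-side configuration traced above goes through rpc_cmd, which in these test scripts is effectively a wrapper around SPDK's scripts/rpc.py talking to the default /var/tmp/spdk.sock. Written out as plain rpc.py invocations (a sketch; the commands, names and addresses are the ones appearing in the log), the sequence that builds the subsystem is roughly:

# Assumes the nvmf_tgt started above (pid 72155) is up and listening on /var/tmp/spdk.sock.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192 -c 0                 # TCP transport, no in-capsule data
$rpc bdev_malloc_create 512 512 -b Malloc1                        # 512 MiB RAM-backed bdev, 512 B blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1     # expose Malloc1 as a namespace
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The initiator then attaches over that listener (nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1), which is what the trace that follows does before carving a GPT partition and exercising ext4, btrfs and xfs on it.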
12:55:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:23.757 12:55:43 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:33:23.757 { 00:33:23.757 "aliases": [ 00:33:23.757 "9fad19ca-e47d-4910-80ca-8b97b06c94ba" 00:33:23.757 ], 00:33:23.757 "assigned_rate_limits": { 00:33:23.757 "r_mbytes_per_sec": 0, 00:33:23.757 "rw_ios_per_sec": 0, 00:33:23.757 "rw_mbytes_per_sec": 0, 00:33:23.757 "w_mbytes_per_sec": 0 00:33:23.757 }, 00:33:23.757 "block_size": 512, 00:33:23.757 "claim_type": "exclusive_write", 00:33:23.757 "claimed": true, 00:33:23.757 "driver_specific": {}, 00:33:23.757 "memory_domains": [ 00:33:23.757 { 00:33:23.757 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:23.757 "dma_device_type": 2 00:33:23.757 } 00:33:23.757 ], 00:33:23.757 "name": "Malloc1", 00:33:23.757 "num_blocks": 1048576, 00:33:23.757 "product_name": "Malloc disk", 00:33:23.757 "supported_io_types": { 00:33:23.757 "abort": true, 00:33:23.757 "compare": false, 00:33:23.757 "compare_and_write": false, 00:33:23.757 "flush": true, 00:33:23.757 "nvme_admin": false, 00:33:23.757 "nvme_io": false, 00:33:23.757 "read": true, 00:33:23.757 "reset": true, 00:33:23.757 "unmap": true, 00:33:23.757 "write": true, 00:33:23.757 "write_zeroes": true 00:33:23.757 }, 00:33:23.757 "uuid": "9fad19ca-e47d-4910-80ca-8b97b06c94ba", 00:33:23.757 "zoned": false 00:33:23.757 } 00:33:23.757 ]' 00:33:23.757 12:55:43 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:33:23.757 12:55:43 -- common/autotest_common.sh@1362 -- # bs=512 00:33:23.757 12:55:43 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:33:23.757 12:55:43 -- common/autotest_common.sh@1363 -- # nb=1048576 00:33:23.757 12:55:43 -- common/autotest_common.sh@1366 -- # bdev_size=512 00:33:23.757 12:55:43 -- common/autotest_common.sh@1367 -- # echo 512 00:33:23.757 12:55:43 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:33:23.757 12:55:43 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:864ae4a3-cd96-439b-8ef2-6dab4d992115 --hostid=864ae4a3-cd96-439b-8ef2-6dab4d992115 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:33:24.018 12:55:43 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:33:24.018 12:55:43 -- common/autotest_common.sh@1177 -- # local i=0 00:33:24.019 12:55:43 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:33:24.019 12:55:43 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:33:24.019 12:55:43 -- common/autotest_common.sh@1184 -- # sleep 2 00:33:26.547 12:55:45 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:33:26.547 12:55:45 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:33:26.547 12:55:45 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:33:26.547 12:55:45 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:33:26.547 12:55:45 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:33:26.547 12:55:45 -- common/autotest_common.sh@1187 -- # return 0 00:33:26.547 12:55:45 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:33:26.547 12:55:45 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:33:26.547 12:55:45 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:33:26.547 12:55:45 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:33:26.547 12:55:45 -- setup/common.sh@76 -- # local dev=nvme0n1 00:33:26.547 12:55:45 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:33:26.547 12:55:45 -- 
setup/common.sh@80 -- # echo 536870912 00:33:26.547 12:55:45 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:33:26.547 12:55:45 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:33:26.547 12:55:45 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:33:26.547 12:55:45 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:33:26.547 12:55:45 -- target/filesystem.sh@69 -- # partprobe 00:33:26.547 12:55:45 -- target/filesystem.sh@70 -- # sleep 1 00:33:27.483 12:55:46 -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:33:27.483 12:55:46 -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:33:27.483 12:55:46 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:33:27.483 12:55:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:27.483 12:55:46 -- common/autotest_common.sh@10 -- # set +x 00:33:27.483 ************************************ 00:33:27.483 START TEST filesystem_ext4 00:33:27.483 ************************************ 00:33:27.483 12:55:46 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create ext4 nvme0n1 00:33:27.483 12:55:46 -- target/filesystem.sh@18 -- # fstype=ext4 00:33:27.483 12:55:46 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:33:27.483 12:55:46 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:33:27.483 12:55:46 -- common/autotest_common.sh@902 -- # local fstype=ext4 00:33:27.483 12:55:46 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:33:27.483 12:55:46 -- common/autotest_common.sh@904 -- # local i=0 00:33:27.483 12:55:46 -- common/autotest_common.sh@905 -- # local force 00:33:27.483 12:55:46 -- common/autotest_common.sh@907 -- # '[' ext4 = ext4 ']' 00:33:27.483 12:55:46 -- common/autotest_common.sh@908 -- # force=-F 00:33:27.483 12:55:46 -- common/autotest_common.sh@913 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:33:27.483 mke2fs 1.46.5 (30-Dec-2021) 00:33:27.483 Discarding device blocks: 0/522240 done 00:33:27.483 Creating filesystem with 522240 1k blocks and 130560 inodes 00:33:27.483 Filesystem UUID: ed506964-c45d-40be-9148-61b9084ec796 00:33:27.483 Superblock backups stored on blocks: 00:33:27.483 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:33:27.483 00:33:27.483 Allocating group tables: 0/64 done 00:33:27.483 Writing inode tables: 0/64 done 00:33:27.483 Creating journal (8192 blocks): done 00:33:27.483 Writing superblocks and filesystem accounting information: 0/64 done 00:33:27.483 00:33:27.483 12:55:46 -- common/autotest_common.sh@921 -- # return 0 00:33:27.483 12:55:46 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:33:27.483 12:55:46 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:33:27.483 12:55:46 -- target/filesystem.sh@25 -- # sync 00:33:27.483 12:55:46 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:33:27.483 12:55:46 -- target/filesystem.sh@27 -- # sync 00:33:27.483 12:55:46 -- target/filesystem.sh@29 -- # i=0 00:33:27.483 12:55:46 -- target/filesystem.sh@30 -- # umount /mnt/device 00:33:27.483 12:55:46 -- target/filesystem.sh@37 -- # kill -0 72155 00:33:27.483 12:55:46 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:33:27.483 12:55:46 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:33:27.483 12:55:46 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:33:27.483 12:55:46 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:33:27.483 ************************************ 00:33:27.483 END TEST filesystem_ext4 00:33:27.483 
************************************ 00:33:27.483 00:33:27.483 real 0m0.326s 00:33:27.483 user 0m0.023s 00:33:27.483 sys 0m0.055s 00:33:27.483 12:55:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:27.483 12:55:46 -- common/autotest_common.sh@10 -- # set +x 00:33:27.741 12:55:46 -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:33:27.742 12:55:46 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:33:27.742 12:55:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:27.742 12:55:46 -- common/autotest_common.sh@10 -- # set +x 00:33:27.742 ************************************ 00:33:27.742 START TEST filesystem_btrfs 00:33:27.742 ************************************ 00:33:27.742 12:55:46 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create btrfs nvme0n1 00:33:27.742 12:55:46 -- target/filesystem.sh@18 -- # fstype=btrfs 00:33:27.742 12:55:46 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:33:27.742 12:55:46 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:33:27.742 12:55:46 -- common/autotest_common.sh@902 -- # local fstype=btrfs 00:33:27.742 12:55:46 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:33:27.742 12:55:46 -- common/autotest_common.sh@904 -- # local i=0 00:33:27.742 12:55:46 -- common/autotest_common.sh@905 -- # local force 00:33:27.742 12:55:46 -- common/autotest_common.sh@907 -- # '[' btrfs = ext4 ']' 00:33:27.742 12:55:46 -- common/autotest_common.sh@910 -- # force=-f 00:33:27.742 12:55:46 -- common/autotest_common.sh@913 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:33:27.742 btrfs-progs v6.6.2 00:33:27.742 See https://btrfs.readthedocs.io for more information. 00:33:27.742 00:33:27.742 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:33:27.742 NOTE: several default settings have changed in version 5.15, please make sure 00:33:27.742 this does not affect your deployments: 00:33:27.742 - DUP for metadata (-m dup) 00:33:27.742 - enabled no-holes (-O no-holes) 00:33:27.742 - enabled free-space-tree (-R free-space-tree) 00:33:27.742 00:33:27.742 Label: (null) 00:33:27.742 UUID: b5c4b8f5-ffae-48eb-8e8d-5478eccc8f33 00:33:27.742 Node size: 16384 00:33:27.742 Sector size: 4096 00:33:27.742 Filesystem size: 510.00MiB 00:33:27.742 Block group profiles: 00:33:27.742 Data: single 8.00MiB 00:33:27.742 Metadata: DUP 32.00MiB 00:33:27.742 System: DUP 8.00MiB 00:33:27.742 SSD detected: yes 00:33:27.742 Zoned device: no 00:33:27.742 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:33:27.742 Runtime features: free-space-tree 00:33:27.742 Checksum: crc32c 00:33:27.742 Number of devices: 1 00:33:27.742 Devices: 00:33:27.742 ID SIZE PATH 00:33:27.742 1 510.00MiB /dev/nvme0n1p1 00:33:27.742 00:33:27.742 12:55:47 -- common/autotest_common.sh@921 -- # return 0 00:33:27.742 12:55:47 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:33:28.000 12:55:47 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:33:28.000 12:55:47 -- target/filesystem.sh@25 -- # sync 00:33:28.000 12:55:47 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:33:28.000 12:55:47 -- target/filesystem.sh@27 -- # sync 00:33:28.000 12:55:47 -- target/filesystem.sh@29 -- # i=0 00:33:28.001 12:55:47 -- target/filesystem.sh@30 -- # umount /mnt/device 00:33:28.001 12:55:47 -- target/filesystem.sh@37 -- # kill -0 72155 00:33:28.001 12:55:47 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:33:28.001 12:55:47 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:33:28.001 12:55:47 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:33:28.001 12:55:47 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:33:28.001 ************************************ 00:33:28.001 END TEST filesystem_btrfs 00:33:28.001 ************************************ 00:33:28.001 00:33:28.001 real 0m0.274s 00:33:28.001 user 0m0.021s 00:33:28.001 sys 0m0.063s 00:33:28.001 12:55:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:28.001 12:55:47 -- common/autotest_common.sh@10 -- # set +x 00:33:28.001 12:55:47 -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:33:28.001 12:55:47 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:33:28.001 12:55:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:28.001 12:55:47 -- common/autotest_common.sh@10 -- # set +x 00:33:28.001 ************************************ 00:33:28.001 START TEST filesystem_xfs 00:33:28.001 ************************************ 00:33:28.001 12:55:47 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create xfs nvme0n1 00:33:28.001 12:55:47 -- target/filesystem.sh@18 -- # fstype=xfs 00:33:28.001 12:55:47 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:33:28.001 12:55:47 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:33:28.001 12:55:47 -- common/autotest_common.sh@902 -- # local fstype=xfs 00:33:28.001 12:55:47 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:33:28.001 12:55:47 -- common/autotest_common.sh@904 -- # local i=0 00:33:28.001 12:55:47 -- common/autotest_common.sh@905 -- # local force 00:33:28.001 12:55:47 -- common/autotest_common.sh@907 -- # '[' xfs = ext4 ']' 00:33:28.001 12:55:47 -- common/autotest_common.sh@910 -- # force=-f 00:33:28.001 12:55:47 -- 
common/autotest_common.sh@913 -- # mkfs.xfs -f /dev/nvme0n1p1 00:33:28.001 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:33:28.001 = sectsz=512 attr=2, projid32bit=1 00:33:28.001 = crc=1 finobt=1, sparse=1, rmapbt=0 00:33:28.001 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:33:28.001 data = bsize=4096 blocks=130560, imaxpct=25 00:33:28.001 = sunit=0 swidth=0 blks 00:33:28.001 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:33:28.001 log =internal log bsize=4096 blocks=16384, version=2 00:33:28.001 = sectsz=512 sunit=0 blks, lazy-count=1 00:33:28.001 realtime =none extsz=4096 blocks=0, rtextents=0 00:33:28.935 Discarding blocks...Done. 00:33:28.935 12:55:48 -- common/autotest_common.sh@921 -- # return 0 00:33:28.935 12:55:48 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:33:31.520 12:55:50 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:33:31.520 12:55:50 -- target/filesystem.sh@25 -- # sync 00:33:31.520 12:55:50 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:33:31.520 12:55:50 -- target/filesystem.sh@27 -- # sync 00:33:31.520 12:55:50 -- target/filesystem.sh@29 -- # i=0 00:33:31.520 12:55:50 -- target/filesystem.sh@30 -- # umount /mnt/device 00:33:31.520 12:55:50 -- target/filesystem.sh@37 -- # kill -0 72155 00:33:31.520 12:55:50 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:33:31.520 12:55:50 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:33:31.520 12:55:50 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:33:31.520 12:55:50 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:33:31.520 00:33:31.520 real 0m3.160s 00:33:31.520 user 0m0.021s 00:33:31.520 sys 0m0.059s 00:33:31.520 12:55:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:31.520 ************************************ 00:33:31.520 END TEST filesystem_xfs 00:33:31.520 ************************************ 00:33:31.520 12:55:50 -- common/autotest_common.sh@10 -- # set +x 00:33:31.520 12:55:50 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:33:31.520 12:55:50 -- target/filesystem.sh@93 -- # sync 00:33:31.520 12:55:50 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:33:31.520 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:33:31.520 12:55:50 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:33:31.520 12:55:50 -- common/autotest_common.sh@1198 -- # local i=0 00:33:31.520 12:55:50 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:33:31.520 12:55:50 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:33:31.520 12:55:50 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:33:31.520 12:55:50 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:33:31.520 12:55:50 -- common/autotest_common.sh@1210 -- # return 0 00:33:31.520 12:55:50 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:31.520 12:55:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:31.520 12:55:50 -- common/autotest_common.sh@10 -- # set +x 00:33:31.520 12:55:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:31.520 12:55:50 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:33:31.520 12:55:50 -- target/filesystem.sh@101 -- # killprocess 72155 00:33:31.520 12:55:50 -- common/autotest_common.sh@926 -- # '[' -z 72155 ']' 00:33:31.520 12:55:50 -- common/autotest_common.sh@930 -- # kill -0 72155 00:33:31.520 12:55:50 -- 
common/autotest_common.sh@931 -- # uname 00:33:31.520 12:55:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:33:31.520 12:55:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 72155 00:33:31.520 12:55:50 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:33:31.520 12:55:50 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:33:31.520 killing process with pid 72155 00:33:31.520 12:55:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 72155' 00:33:31.520 12:55:50 -- common/autotest_common.sh@945 -- # kill 72155 00:33:31.520 12:55:50 -- common/autotest_common.sh@950 -- # wait 72155 00:33:31.779 12:55:51 -- target/filesystem.sh@102 -- # nvmfpid= 00:33:31.779 00:33:31.779 real 0m9.168s 00:33:31.779 user 0m34.773s 00:33:31.779 sys 0m1.493s 00:33:31.779 12:55:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:31.779 12:55:51 -- common/autotest_common.sh@10 -- # set +x 00:33:31.779 ************************************ 00:33:31.779 END TEST nvmf_filesystem_no_in_capsule 00:33:31.779 ************************************ 00:33:31.779 12:55:51 -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:33:31.779 12:55:51 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:33:31.779 12:55:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:31.779 12:55:51 -- common/autotest_common.sh@10 -- # set +x 00:33:31.779 ************************************ 00:33:31.779 START TEST nvmf_filesystem_in_capsule 00:33:31.779 ************************************ 00:33:31.779 12:55:51 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_part 4096 00:33:31.779 12:55:51 -- target/filesystem.sh@47 -- # in_capsule=4096 00:33:31.779 12:55:51 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:33:31.779 12:55:51 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:33:31.779 12:55:51 -- common/autotest_common.sh@712 -- # xtrace_disable 00:33:31.779 12:55:51 -- common/autotest_common.sh@10 -- # set +x 00:33:31.779 12:55:51 -- nvmf/common.sh@469 -- # nvmfpid=72463 00:33:31.779 12:55:51 -- nvmf/common.sh@470 -- # waitforlisten 72463 00:33:31.779 12:55:51 -- common/autotest_common.sh@819 -- # '[' -z 72463 ']' 00:33:31.779 12:55:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:31.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:31.779 12:55:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:33:31.779 12:55:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:31.779 12:55:51 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:33:31.779 12:55:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:33:31.779 12:55:51 -- common/autotest_common.sh@10 -- # set +x 00:33:31.779 [2024-07-22 12:55:51.112337] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
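The in_capsule variant repeats the same start-up pattern traced above: nvmfappstart launches nvmf_tgt inside the nvmf_tgt_ns_spdk namespace and then waits for the RPC listener before any rpc_cmd call is issued. A minimal standalone sketch of that launch-and-wait step, using the binary path and namespace name from the trace (the poll interval and the socket-existence check are simplifications of what waitforlisten really does):

    NS=nvmf_tgt_ns_spdk
    APP=/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt
    SOCK=/var/tmp/spdk.sock

    ip netns exec "$NS" "$APP" -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    for ((i = 0; i < 100; i++)); do
        [ -S "$SOCK" ] && break        # the RPC socket appears once EAL init completes
        sleep 0.1
    done
    [ -S "$SOCK" ] || { echo "nvmf_tgt ($nvmfpid) never exposed $SOCK" >&2; exit 1; }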
00:33:31.779 [2024-07-22 12:55:51.112427] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:32.037 [2024-07-22 12:55:51.249346] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:32.037 [2024-07-22 12:55:51.350732] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:33:32.037 [2024-07-22 12:55:51.350884] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:32.037 [2024-07-22 12:55:51.350898] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:32.037 [2024-07-22 12:55:51.350907] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:32.037 [2024-07-22 12:55:51.351069] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:32.037 [2024-07-22 12:55:51.351471] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:33:32.037 [2024-07-22 12:55:51.351609] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:33:32.037 [2024-07-22 12:55:51.351675] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:32.971 12:55:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:33:32.971 12:55:52 -- common/autotest_common.sh@852 -- # return 0 00:33:32.971 12:55:52 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:33:32.971 12:55:52 -- common/autotest_common.sh@718 -- # xtrace_disable 00:33:32.971 12:55:52 -- common/autotest_common.sh@10 -- # set +x 00:33:32.971 12:55:52 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:32.971 12:55:52 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:33:32.971 12:55:52 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:33:32.971 12:55:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:32.971 12:55:52 -- common/autotest_common.sh@10 -- # set +x 00:33:32.971 [2024-07-22 12:55:52.114774] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:32.971 12:55:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:32.971 12:55:52 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:33:32.971 12:55:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:32.971 12:55:52 -- common/autotest_common.sh@10 -- # set +x 00:33:32.971 Malloc1 00:33:32.971 12:55:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:32.971 12:55:52 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:33:32.971 12:55:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:32.971 12:55:52 -- common/autotest_common.sh@10 -- # set +x 00:33:32.971 12:55:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:32.971 12:55:52 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:33:32.971 12:55:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:32.971 12:55:52 -- common/autotest_common.sh@10 -- # set +x 00:33:32.971 12:55:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:32.971 12:55:52 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:32.971 12:55:52 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:33:32.971 12:55:52 -- common/autotest_common.sh@10 -- # set +x 00:33:32.971 [2024-07-22 12:55:52.304650] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:32.971 12:55:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:32.971 12:55:52 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:33:32.971 12:55:52 -- common/autotest_common.sh@1357 -- # local bdev_name=Malloc1 00:33:32.971 12:55:52 -- common/autotest_common.sh@1358 -- # local bdev_info 00:33:32.971 12:55:52 -- common/autotest_common.sh@1359 -- # local bs 00:33:32.971 12:55:52 -- common/autotest_common.sh@1360 -- # local nb 00:33:32.971 12:55:52 -- common/autotest_common.sh@1361 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:33:32.971 12:55:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:32.971 12:55:52 -- common/autotest_common.sh@10 -- # set +x 00:33:32.971 12:55:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:32.971 12:55:52 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:33:32.971 { 00:33:32.971 "aliases": [ 00:33:32.971 "eb5323f6-d6c1-4bd2-a19a-7363bdcdda35" 00:33:32.971 ], 00:33:32.971 "assigned_rate_limits": { 00:33:32.971 "r_mbytes_per_sec": 0, 00:33:32.971 "rw_ios_per_sec": 0, 00:33:32.971 "rw_mbytes_per_sec": 0, 00:33:32.971 "w_mbytes_per_sec": 0 00:33:32.971 }, 00:33:32.971 "block_size": 512, 00:33:32.971 "claim_type": "exclusive_write", 00:33:32.971 "claimed": true, 00:33:32.971 "driver_specific": {}, 00:33:32.971 "memory_domains": [ 00:33:32.971 { 00:33:32.971 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:32.971 "dma_device_type": 2 00:33:32.971 } 00:33:32.971 ], 00:33:32.971 "name": "Malloc1", 00:33:32.971 "num_blocks": 1048576, 00:33:32.971 "product_name": "Malloc disk", 00:33:32.971 "supported_io_types": { 00:33:32.971 "abort": true, 00:33:32.971 "compare": false, 00:33:32.971 "compare_and_write": false, 00:33:32.971 "flush": true, 00:33:32.971 "nvme_admin": false, 00:33:32.972 "nvme_io": false, 00:33:32.972 "read": true, 00:33:32.972 "reset": true, 00:33:32.972 "unmap": true, 00:33:32.972 "write": true, 00:33:32.972 "write_zeroes": true 00:33:32.972 }, 00:33:32.972 "uuid": "eb5323f6-d6c1-4bd2-a19a-7363bdcdda35", 00:33:32.972 "zoned": false 00:33:32.972 } 00:33:32.972 ]' 00:33:32.972 12:55:52 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:33:32.972 12:55:52 -- common/autotest_common.sh@1362 -- # bs=512 00:33:32.972 12:55:52 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:33:33.230 12:55:52 -- common/autotest_common.sh@1363 -- # nb=1048576 00:33:33.230 12:55:52 -- common/autotest_common.sh@1366 -- # bdev_size=512 00:33:33.230 12:55:52 -- common/autotest_common.sh@1367 -- # echo 512 00:33:33.230 12:55:52 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:33:33.230 12:55:52 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:864ae4a3-cd96-439b-8ef2-6dab4d992115 --hostid=864ae4a3-cd96-439b-8ef2-6dab4d992115 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:33:33.230 12:55:52 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:33:33.230 12:55:52 -- common/autotest_common.sh@1177 -- # local i=0 00:33:33.230 12:55:52 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:33:33.230 12:55:52 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:33:33.230 12:55:52 -- common/autotest_common.sh@1184 -- # sleep 2 00:33:35.759 12:55:54 -- 
common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:33:35.759 12:55:54 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:33:35.759 12:55:54 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:33:35.759 12:55:54 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:33:35.759 12:55:54 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:33:35.759 12:55:54 -- common/autotest_common.sh@1187 -- # return 0 00:33:35.759 12:55:54 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:33:35.759 12:55:54 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:33:35.759 12:55:54 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:33:35.759 12:55:54 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:33:35.759 12:55:54 -- setup/common.sh@76 -- # local dev=nvme0n1 00:33:35.759 12:55:54 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:33:35.759 12:55:54 -- setup/common.sh@80 -- # echo 536870912 00:33:35.759 12:55:54 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:33:35.759 12:55:54 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:33:35.759 12:55:54 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:33:35.759 12:55:54 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:33:35.759 12:55:54 -- target/filesystem.sh@69 -- # partprobe 00:33:35.759 12:55:54 -- target/filesystem.sh@70 -- # sleep 1 00:33:36.695 12:55:55 -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:33:36.695 12:55:55 -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:33:36.695 12:55:55 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:33:36.695 12:55:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:36.695 12:55:55 -- common/autotest_common.sh@10 -- # set +x 00:33:36.695 ************************************ 00:33:36.695 START TEST filesystem_in_capsule_ext4 00:33:36.695 ************************************ 00:33:36.695 12:55:55 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create ext4 nvme0n1 00:33:36.695 12:55:55 -- target/filesystem.sh@18 -- # fstype=ext4 00:33:36.695 12:55:55 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:33:36.695 12:55:55 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:33:36.695 12:55:55 -- common/autotest_common.sh@902 -- # local fstype=ext4 00:33:36.695 12:55:55 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:33:36.695 12:55:55 -- common/autotest_common.sh@904 -- # local i=0 00:33:36.695 12:55:55 -- common/autotest_common.sh@905 -- # local force 00:33:36.695 12:55:55 -- common/autotest_common.sh@907 -- # '[' ext4 = ext4 ']' 00:33:36.695 12:55:55 -- common/autotest_common.sh@908 -- # force=-F 00:33:36.695 12:55:55 -- common/autotest_common.sh@913 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:33:36.695 mke2fs 1.46.5 (30-Dec-2021) 00:33:36.695 Discarding device blocks: 0/522240 done 00:33:36.695 Creating filesystem with 522240 1k blocks and 130560 inodes 00:33:36.695 Filesystem UUID: 4b93ff1e-3a87-459e-a86a-d06f78f90639 00:33:36.695 Superblock backups stored on blocks: 00:33:36.695 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:33:36.695 00:33:36.695 Allocating group tables: 0/64 done 00:33:36.695 Writing inode tables: 0/64 done 00:33:36.695 Creating journal (8192 blocks): done 00:33:36.695 Writing superblocks and filesystem accounting information: 0/64 done 00:33:36.695 00:33:36.695 
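The host-side sequence traced above is: connect to the subsystem, poll lsblk until a block device with the expected serial shows up, resolve its name, then lay down a single GPT partition for the filesystem tests. A condensed sketch of that flow, reusing the NQN, host identifiers, serial and the ~15 x 2s poll bound printed in the log:

    SERIAL=SPDKISFASTANDAWESOME
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:864ae4a3-cd96-439b-8ef2-6dab4d992115 \
        --hostid=864ae4a3-cd96-439b-8ef2-6dab4d992115

    for ((i = 0; i <= 15; i++)); do
        # waitforserial: keep polling until the namespace is visible to the host
        [ "$(lsblk -l -o NAME,SERIAL | grep -c "$SERIAL")" -ge 1 ] && break
        sleep 2
    done

    # Map the serial back to a device name, then create one partition spanning the device.
    nvme_name=$(lsblk -l -o NAME,SERIAL | grep -oP "([\w]*)(?=\s+$SERIAL)")
    parted -s "/dev/$nvme_name" mklabel gpt mkpart SPDK_TEST 0% 100%
    partprobe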
12:55:55 -- common/autotest_common.sh@921 -- # return 0 00:33:36.695 12:55:55 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:33:36.695 12:55:56 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:33:36.695 12:55:56 -- target/filesystem.sh@25 -- # sync 00:33:36.954 12:55:56 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:33:36.954 12:55:56 -- target/filesystem.sh@27 -- # sync 00:33:36.954 12:55:56 -- target/filesystem.sh@29 -- # i=0 00:33:36.954 12:55:56 -- target/filesystem.sh@30 -- # umount /mnt/device 00:33:36.954 12:55:56 -- target/filesystem.sh@37 -- # kill -0 72463 00:33:36.954 12:55:56 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:33:36.954 12:55:56 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:33:36.954 12:55:56 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:33:36.954 12:55:56 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:33:36.954 ************************************ 00:33:36.954 END TEST filesystem_in_capsule_ext4 00:33:36.954 ************************************ 00:33:36.954 00:33:36.954 real 0m0.400s 00:33:36.954 user 0m0.029s 00:33:36.954 sys 0m0.053s 00:33:36.954 12:55:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:36.954 12:55:56 -- common/autotest_common.sh@10 -- # set +x 00:33:36.954 12:55:56 -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:33:36.954 12:55:56 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:33:36.954 12:55:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:36.954 12:55:56 -- common/autotest_common.sh@10 -- # set +x 00:33:36.954 ************************************ 00:33:36.954 START TEST filesystem_in_capsule_btrfs 00:33:36.954 ************************************ 00:33:36.954 12:55:56 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create btrfs nvme0n1 00:33:36.954 12:55:56 -- target/filesystem.sh@18 -- # fstype=btrfs 00:33:36.954 12:55:56 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:33:36.954 12:55:56 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:33:36.954 12:55:56 -- common/autotest_common.sh@902 -- # local fstype=btrfs 00:33:36.954 12:55:56 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:33:36.954 12:55:56 -- common/autotest_common.sh@904 -- # local i=0 00:33:36.954 12:55:56 -- common/autotest_common.sh@905 -- # local force 00:33:36.954 12:55:56 -- common/autotest_common.sh@907 -- # '[' btrfs = ext4 ']' 00:33:36.954 12:55:56 -- common/autotest_common.sh@910 -- # force=-f 00:33:36.954 12:55:56 -- common/autotest_common.sh@913 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:33:36.954 btrfs-progs v6.6.2 00:33:36.954 See https://btrfs.readthedocs.io for more information. 00:33:36.954 00:33:36.954 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:33:36.954 NOTE: several default settings have changed in version 5.15, please make sure 00:33:36.954 this does not affect your deployments: 00:33:36.954 - DUP for metadata (-m dup) 00:33:36.954 - enabled no-holes (-O no-holes) 00:33:36.954 - enabled free-space-tree (-R free-space-tree) 00:33:36.954 00:33:36.954 Label: (null) 00:33:36.954 UUID: c2d74c92-f630-47e2-9fc1-2c2b93197251 00:33:36.954 Node size: 16384 00:33:36.954 Sector size: 4096 00:33:36.954 Filesystem size: 510.00MiB 00:33:36.954 Block group profiles: 00:33:36.954 Data: single 8.00MiB 00:33:36.954 Metadata: DUP 32.00MiB 00:33:36.954 System: DUP 8.00MiB 00:33:36.954 SSD detected: yes 00:33:36.954 Zoned device: no 00:33:36.954 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:33:36.954 Runtime features: free-space-tree 00:33:36.954 Checksum: crc32c 00:33:36.954 Number of devices: 1 00:33:36.954 Devices: 00:33:36.954 ID SIZE PATH 00:33:36.954 1 510.00MiB /dev/nvme0n1p1 00:33:36.954 00:33:36.954 12:55:56 -- common/autotest_common.sh@921 -- # return 0 00:33:36.954 12:55:56 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:33:37.213 12:55:56 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:33:37.213 12:55:56 -- target/filesystem.sh@25 -- # sync 00:33:37.213 12:55:56 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:33:37.213 12:55:56 -- target/filesystem.sh@27 -- # sync 00:33:37.213 12:55:56 -- target/filesystem.sh@29 -- # i=0 00:33:37.213 12:55:56 -- target/filesystem.sh@30 -- # umount /mnt/device 00:33:37.213 12:55:56 -- target/filesystem.sh@37 -- # kill -0 72463 00:33:37.213 12:55:56 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:33:37.213 12:55:56 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:33:37.213 12:55:56 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:33:37.213 12:55:56 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:33:37.213 ************************************ 00:33:37.213 END TEST filesystem_in_capsule_btrfs 00:33:37.213 ************************************ 00:33:37.213 00:33:37.213 real 0m0.224s 00:33:37.213 user 0m0.020s 00:33:37.213 sys 0m0.068s 00:33:37.213 12:55:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:37.213 12:55:56 -- common/autotest_common.sh@10 -- # set +x 00:33:37.213 12:55:56 -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:33:37.213 12:55:56 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:33:37.213 12:55:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:37.213 12:55:56 -- common/autotest_common.sh@10 -- # set +x 00:33:37.213 ************************************ 00:33:37.213 START TEST filesystem_in_capsule_xfs 00:33:37.213 ************************************ 00:33:37.213 12:55:56 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create xfs nvme0n1 00:33:37.213 12:55:56 -- target/filesystem.sh@18 -- # fstype=xfs 00:33:37.213 12:55:56 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:33:37.213 12:55:56 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:33:37.213 12:55:56 -- common/autotest_common.sh@902 -- # local fstype=xfs 00:33:37.213 12:55:56 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:33:37.213 12:55:56 -- common/autotest_common.sh@904 -- # local i=0 00:33:37.213 12:55:56 -- common/autotest_common.sh@905 -- # local force 00:33:37.213 12:55:56 -- common/autotest_common.sh@907 -- # '[' xfs = ext4 ']' 00:33:37.213 12:55:56 -- common/autotest_common.sh@910 -- # force=-f 
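The make_filesystem traces above (local fstype/dev_name/i/force, the ext4 check, then force=-F or force=-f before the mkfs call) correspond to a small helper that picks the right overwrite flag per filesystem type and retries the format. A sketch of that pattern follows; the flag selection mirrors the trace verbatim, while the retry limit is an assumption for illustration:

    make_filesystem() {
        local fstype=$1
        local dev_name=$2
        local i=0
        local force

        if [ "$fstype" = ext4 ]; then
            force=-F        # mkfs.ext4 uses -F to overwrite an existing signature
        else
            force=-f        # mkfs.xfs and mkfs.btrfs use lowercase -f
        fi

        until mkfs."$fstype" $force "$dev_name"; do
            (( ++i > 3 )) && return 1   # give up after a few attempts (assumed limit)
            sleep 1
        done
        return 0
    }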
00:33:37.213 12:55:56 -- common/autotest_common.sh@913 -- # mkfs.xfs -f /dev/nvme0n1p1 00:33:37.213 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:33:37.213 = sectsz=512 attr=2, projid32bit=1 00:33:37.213 = crc=1 finobt=1, sparse=1, rmapbt=0 00:33:37.213 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:33:37.213 data = bsize=4096 blocks=130560, imaxpct=25 00:33:37.213 = sunit=0 swidth=0 blks 00:33:37.213 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:33:37.213 log =internal log bsize=4096 blocks=16384, version=2 00:33:37.213 = sectsz=512 sunit=0 blks, lazy-count=1 00:33:37.213 realtime =none extsz=4096 blocks=0, rtextents=0 00:33:38.198 Discarding blocks...Done. 00:33:38.198 12:55:57 -- common/autotest_common.sh@921 -- # return 0 00:33:38.198 12:55:57 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:33:40.111 12:55:59 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:33:40.111 12:55:59 -- target/filesystem.sh@25 -- # sync 00:33:40.111 12:55:59 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:33:40.111 12:55:59 -- target/filesystem.sh@27 -- # sync 00:33:40.111 12:55:59 -- target/filesystem.sh@29 -- # i=0 00:33:40.111 12:55:59 -- target/filesystem.sh@30 -- # umount /mnt/device 00:33:40.111 12:55:59 -- target/filesystem.sh@37 -- # kill -0 72463 00:33:40.111 12:55:59 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:33:40.111 12:55:59 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:33:40.111 12:55:59 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:33:40.111 12:55:59 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:33:40.111 ************************************ 00:33:40.111 END TEST filesystem_in_capsule_xfs 00:33:40.111 ************************************ 00:33:40.111 00:33:40.111 real 0m2.608s 00:33:40.111 user 0m0.024s 00:33:40.111 sys 0m0.051s 00:33:40.111 12:55:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:40.111 12:55:59 -- common/autotest_common.sh@10 -- # set +x 00:33:40.111 12:55:59 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:33:40.111 12:55:59 -- target/filesystem.sh@93 -- # sync 00:33:40.111 12:55:59 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:33:40.111 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:33:40.111 12:55:59 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:33:40.111 12:55:59 -- common/autotest_common.sh@1198 -- # local i=0 00:33:40.111 12:55:59 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:33:40.111 12:55:59 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:33:40.111 12:55:59 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:33:40.111 12:55:59 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:33:40.111 12:55:59 -- common/autotest_common.sh@1210 -- # return 0 00:33:40.111 12:55:59 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:40.111 12:55:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:40.111 12:55:59 -- common/autotest_common.sh@10 -- # set +x 00:33:40.111 12:55:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:40.111 12:55:59 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:33:40.111 12:55:59 -- target/filesystem.sh@101 -- # killprocess 72463 00:33:40.111 12:55:59 -- common/autotest_common.sh@926 -- # '[' -z 72463 ']' 00:33:40.111 12:55:59 -- common/autotest_common.sh@930 -- # kill -0 72463 
00:33:40.111 12:55:59 -- common/autotest_common.sh@931 -- # uname 00:33:40.111 12:55:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:33:40.111 12:55:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 72463 00:33:40.111 killing process with pid 72463 00:33:40.111 12:55:59 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:33:40.111 12:55:59 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:33:40.111 12:55:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 72463' 00:33:40.111 12:55:59 -- common/autotest_common.sh@945 -- # kill 72463 00:33:40.111 12:55:59 -- common/autotest_common.sh@950 -- # wait 72463 00:33:40.376 12:55:59 -- target/filesystem.sh@102 -- # nvmfpid= 00:33:40.376 00:33:40.376 real 0m8.639s 00:33:40.376 user 0m32.861s 00:33:40.376 sys 0m1.400s 00:33:40.376 12:55:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:40.376 12:55:59 -- common/autotest_common.sh@10 -- # set +x 00:33:40.376 ************************************ 00:33:40.376 END TEST nvmf_filesystem_in_capsule 00:33:40.376 ************************************ 00:33:40.376 12:55:59 -- target/filesystem.sh@108 -- # nvmftestfini 00:33:40.376 12:55:59 -- nvmf/common.sh@476 -- # nvmfcleanup 00:33:40.376 12:55:59 -- nvmf/common.sh@116 -- # sync 00:33:40.376 12:55:59 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:33:40.376 12:55:59 -- nvmf/common.sh@119 -- # set +e 00:33:40.376 12:55:59 -- nvmf/common.sh@120 -- # for i in {1..20} 00:33:40.376 12:55:59 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:33:40.376 rmmod nvme_tcp 00:33:40.635 rmmod nvme_fabrics 00:33:40.635 rmmod nvme_keyring 00:33:40.635 12:55:59 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:33:40.635 12:55:59 -- nvmf/common.sh@123 -- # set -e 00:33:40.635 12:55:59 -- nvmf/common.sh@124 -- # return 0 00:33:40.635 12:55:59 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:33:40.635 12:55:59 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:33:40.635 12:55:59 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:33:40.635 12:55:59 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:33:40.635 12:55:59 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:40.635 12:55:59 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:33:40.635 12:55:59 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:40.635 12:55:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:40.635 12:55:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:40.635 12:55:59 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:33:40.635 00:33:40.635 real 0m18.626s 00:33:40.635 user 1m7.850s 00:33:40.635 sys 0m3.265s 00:33:40.635 12:55:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:40.635 12:55:59 -- common/autotest_common.sh@10 -- # set +x 00:33:40.635 ************************************ 00:33:40.635 END TEST nvmf_filesystem 00:33:40.635 ************************************ 00:33:40.635 12:55:59 -- nvmf/nvmf.sh@25 -- # run_test nvmf_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:33:40.635 12:55:59 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:33:40.635 12:55:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:40.635 12:55:59 -- common/autotest_common.sh@10 -- # set +x 00:33:40.635 ************************************ 00:33:40.635 START TEST nvmf_discovery 00:33:40.635 ************************************ 00:33:40.635 12:55:59 -- 
common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:33:40.635 * Looking for test storage... 00:33:40.635 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:33:40.635 12:55:59 -- target/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:33:40.635 12:55:59 -- nvmf/common.sh@7 -- # uname -s 00:33:40.635 12:56:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:40.635 12:56:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:40.635 12:56:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:40.636 12:56:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:40.636 12:56:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:40.636 12:56:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:40.636 12:56:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:40.636 12:56:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:40.636 12:56:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:40.636 12:56:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:40.636 12:56:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:864ae4a3-cd96-439b-8ef2-6dab4d992115 00:33:40.636 12:56:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=864ae4a3-cd96-439b-8ef2-6dab4d992115 00:33:40.636 12:56:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:40.636 12:56:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:40.636 12:56:00 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:33:40.636 12:56:00 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:33:40.636 12:56:00 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:40.636 12:56:00 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:40.636 12:56:00 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:40.636 12:56:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:40.636 12:56:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:40.636 12:56:00 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:40.636 12:56:00 -- paths/export.sh@5 -- # export PATH 00:33:40.636 12:56:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:40.636 12:56:00 -- nvmf/common.sh@46 -- # : 0 00:33:40.636 12:56:00 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:33:40.636 12:56:00 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:33:40.636 12:56:00 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:33:40.636 12:56:00 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:40.636 12:56:00 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:40.636 12:56:00 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:33:40.636 12:56:00 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:33:40.636 12:56:00 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:33:40.636 12:56:00 -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:33:40.636 12:56:00 -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:33:40.636 12:56:00 -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:33:40.636 12:56:00 -- target/discovery.sh@15 -- # hash nvme 00:33:40.636 12:56:00 -- target/discovery.sh@20 -- # nvmftestinit 00:33:40.636 12:56:00 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:33:40.636 12:56:00 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:40.636 12:56:00 -- nvmf/common.sh@436 -- # prepare_net_devs 00:33:40.636 12:56:00 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:33:40.636 12:56:00 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:33:40.636 12:56:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:40.636 12:56:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:40.636 12:56:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:40.636 12:56:00 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:33:40.636 12:56:00 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:33:40.636 12:56:00 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:33:40.636 12:56:00 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:33:40.636 12:56:00 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:33:40.636 12:56:00 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:33:40.636 12:56:00 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:40.636 12:56:00 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:40.636 12:56:00 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:33:40.636 12:56:00 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:33:40.636 12:56:00 -- nvmf/common.sh@144 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:33:40.636 12:56:00 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:33:40.636 12:56:00 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:33:40.636 12:56:00 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:40.636 12:56:00 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:33:40.636 12:56:00 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:33:40.636 12:56:00 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:33:40.636 12:56:00 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:33:40.636 12:56:00 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:33:40.636 12:56:00 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:33:40.895 Cannot find device "nvmf_tgt_br" 00:33:40.895 12:56:00 -- nvmf/common.sh@154 -- # true 00:33:40.895 12:56:00 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:33:40.895 Cannot find device "nvmf_tgt_br2" 00:33:40.895 12:56:00 -- nvmf/common.sh@155 -- # true 00:33:40.895 12:56:00 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:33:40.895 12:56:00 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:33:40.895 Cannot find device "nvmf_tgt_br" 00:33:40.895 12:56:00 -- nvmf/common.sh@157 -- # true 00:33:40.895 12:56:00 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:33:40.895 Cannot find device "nvmf_tgt_br2" 00:33:40.895 12:56:00 -- nvmf/common.sh@158 -- # true 00:33:40.895 12:56:00 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:33:40.895 12:56:00 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:33:40.895 12:56:00 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:33:40.895 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:33:40.895 12:56:00 -- nvmf/common.sh@161 -- # true 00:33:40.895 12:56:00 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:33:40.895 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:33:40.895 12:56:00 -- nvmf/common.sh@162 -- # true 00:33:40.895 12:56:00 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:33:40.895 12:56:00 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:33:40.895 12:56:00 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:33:40.895 12:56:00 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:33:40.895 12:56:00 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:33:40.895 12:56:00 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:33:40.895 12:56:00 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:33:40.895 12:56:00 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:33:40.895 12:56:00 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:33:40.895 12:56:00 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:33:40.895 12:56:00 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:33:40.895 12:56:00 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:33:40.895 12:56:00 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:33:40.896 12:56:00 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:33:40.896 12:56:00 
-- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:33:40.896 12:56:00 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:33:40.896 12:56:00 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:33:40.896 12:56:00 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:33:41.155 12:56:00 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:33:41.155 12:56:00 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:33:41.155 12:56:00 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:33:41.155 12:56:00 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:33:41.155 12:56:00 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:33:41.155 12:56:00 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:33:41.155 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:41.155 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:33:41.155 00:33:41.155 --- 10.0.0.2 ping statistics --- 00:33:41.155 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:41.155 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:33:41.155 12:56:00 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:33:41.155 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:33:41.155 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:33:41.155 00:33:41.155 --- 10.0.0.3 ping statistics --- 00:33:41.155 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:41.155 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:33:41.155 12:56:00 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:33:41.155 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:41.155 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:33:41.155 00:33:41.155 --- 10.0.0.1 ping statistics --- 00:33:41.155 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:41.155 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:33:41.155 12:56:00 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:41.155 12:56:00 -- nvmf/common.sh@421 -- # return 0 00:33:41.155 12:56:00 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:33:41.155 12:56:00 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:41.155 12:56:00 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:33:41.155 12:56:00 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:33:41.155 12:56:00 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:41.155 12:56:00 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:33:41.155 12:56:00 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:33:41.155 12:56:00 -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:33:41.155 12:56:00 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:33:41.155 12:56:00 -- common/autotest_common.sh@712 -- # xtrace_disable 00:33:41.155 12:56:00 -- common/autotest_common.sh@10 -- # set +x 00:33:41.155 12:56:00 -- nvmf/common.sh@469 -- # nvmfpid=72923 00:33:41.155 12:56:00 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:33:41.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
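The nvmf_veth_init calls above build the test topology: one veth pair for the initiator, one for the target (whose far end is moved into the nvmf_tgt_ns_spdk namespace), both root-side peers enslaved to a bridge, plus an iptables accept rule for port 4420 and ping checks in each direction. A condensed root-privileged sketch with the names and addresses from the log (the second target interface, nvmf_tgt_if2 / 10.0.0.3, is omitted here for brevity):

    NS=nvmf_tgt_ns_spdk

    ip netns add "$NS"
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator-side pair
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target-side pair
    ip link set nvmf_tgt_if netns "$NS"                         # target end lives in the namespace

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev nvmf_tgt_if

    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec "$NS" ip link set nvmf_tgt_if up
    ip netns exec "$NS" ip link set lo up

    ip link add nvmf_br type bridge                             # bridge the two root-side peers
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br

    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

    ping -c 1 10.0.0.2                                          # host -> target reachability
    ip netns exec "$NS" ping -c 1 10.0.0.1                      # target -> host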
00:33:41.155 12:56:00 -- nvmf/common.sh@470 -- # waitforlisten 72923 00:33:41.155 12:56:00 -- common/autotest_common.sh@819 -- # '[' -z 72923 ']' 00:33:41.155 12:56:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:41.155 12:56:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:33:41.155 12:56:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:41.155 12:56:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:33:41.155 12:56:00 -- common/autotest_common.sh@10 -- # set +x 00:33:41.155 [2024-07-22 12:56:00.462294] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:33:41.155 [2024-07-22 12:56:00.462397] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:41.414 [2024-07-22 12:56:00.605061] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:41.414 [2024-07-22 12:56:00.710259] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:33:41.414 [2024-07-22 12:56:00.710678] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:41.414 [2024-07-22 12:56:00.710819] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:41.414 [2024-07-22 12:56:00.710976] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:41.414 [2024-07-22 12:56:00.711291] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:41.414 [2024-07-22 12:56:00.711373] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:33:41.414 [2024-07-22 12:56:00.711516] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:33:41.414 [2024-07-22 12:56:00.711524] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:42.351 12:56:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:33:42.351 12:56:01 -- common/autotest_common.sh@852 -- # return 0 00:33:42.351 12:56:01 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:33:42.351 12:56:01 -- common/autotest_common.sh@718 -- # xtrace_disable 00:33:42.351 12:56:01 -- common/autotest_common.sh@10 -- # set +x 00:33:42.351 12:56:01 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:42.351 12:56:01 -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:42.351 12:56:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:42.351 12:56:01 -- common/autotest_common.sh@10 -- # set +x 00:33:42.351 [2024-07-22 12:56:01.463367] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:42.351 12:56:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:42.351 12:56:01 -- target/discovery.sh@26 -- # seq 1 4 00:33:42.351 12:56:01 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:33:42.351 12:56:01 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:33:42.351 12:56:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:42.351 12:56:01 -- common/autotest_common.sh@10 -- # set +x 00:33:42.351 Null1 00:33:42.351 12:56:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:42.351 12:56:01 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:42.351 12:56:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:42.351 12:56:01 -- common/autotest_common.sh@10 -- # set +x 00:33:42.351 12:56:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:42.351 12:56:01 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:33:42.351 12:56:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:42.351 12:56:01 -- common/autotest_common.sh@10 -- # set +x 00:33:42.351 12:56:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:42.351 12:56:01 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:42.351 12:56:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:42.351 12:56:01 -- common/autotest_common.sh@10 -- # set +x 00:33:42.351 [2024-07-22 12:56:01.526163] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:42.351 12:56:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:42.351 12:56:01 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:33:42.351 12:56:01 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:33:42.351 12:56:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:42.351 12:56:01 -- common/autotest_common.sh@10 -- # set +x 00:33:42.351 Null2 00:33:42.351 12:56:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:42.351 12:56:01 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:33:42.351 12:56:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:42.351 12:56:01 -- common/autotest_common.sh@10 -- # set +x 00:33:42.351 12:56:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:42.351 12:56:01 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:33:42.351 12:56:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:42.351 12:56:01 -- common/autotest_common.sh@10 -- # set +x 00:33:42.351 12:56:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:42.351 12:56:01 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:33:42.351 12:56:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:42.351 12:56:01 -- common/autotest_common.sh@10 -- # set +x 00:33:42.351 12:56:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:42.351 12:56:01 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:33:42.351 12:56:01 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:33:42.351 12:56:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:42.351 12:56:01 -- common/autotest_common.sh@10 -- # set +x 00:33:42.351 Null3 00:33:42.351 12:56:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:42.351 12:56:01 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:33:42.351 12:56:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:42.351 12:56:01 -- common/autotest_common.sh@10 -- # set +x 00:33:42.351 12:56:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:42.351 12:56:01 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:33:42.351 12:56:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:42.351 12:56:01 -- common/autotest_common.sh@10 -- # set +x 00:33:42.351 12:56:01 
-- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:42.351 12:56:01 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:33:42.351 12:56:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:42.351 12:56:01 -- common/autotest_common.sh@10 -- # set +x 00:33:42.351 12:56:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:42.351 12:56:01 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:33:42.351 12:56:01 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:33:42.351 12:56:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:42.351 12:56:01 -- common/autotest_common.sh@10 -- # set +x 00:33:42.351 Null4 00:33:42.351 12:56:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:42.351 12:56:01 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:33:42.351 12:56:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:42.351 12:56:01 -- common/autotest_common.sh@10 -- # set +x 00:33:42.351 12:56:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:42.351 12:56:01 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:33:42.351 12:56:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:42.351 12:56:01 -- common/autotest_common.sh@10 -- # set +x 00:33:42.351 12:56:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:42.351 12:56:01 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:33:42.351 12:56:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:42.351 12:56:01 -- common/autotest_common.sh@10 -- # set +x 00:33:42.351 12:56:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:42.351 12:56:01 -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:42.351 12:56:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:42.351 12:56:01 -- common/autotest_common.sh@10 -- # set +x 00:33:42.351 12:56:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:42.351 12:56:01 -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:33:42.351 12:56:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:42.351 12:56:01 -- common/autotest_common.sh@10 -- # set +x 00:33:42.351 12:56:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:42.351 12:56:01 -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:864ae4a3-cd96-439b-8ef2-6dab4d992115 --hostid=864ae4a3-cd96-439b-8ef2-6dab4d992115 -t tcp -a 10.0.0.2 -s 4420 00:33:42.351 00:33:42.351 Discovery Log Number of Records 6, Generation counter 6 00:33:42.351 =====Discovery Log Entry 0====== 00:33:42.351 trtype: tcp 00:33:42.351 adrfam: ipv4 00:33:42.351 subtype: current discovery subsystem 00:33:42.351 treq: not required 00:33:42.351 portid: 0 00:33:42.351 trsvcid: 4420 00:33:42.351 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:33:42.351 traddr: 10.0.0.2 00:33:42.351 eflags: explicit discovery connections, duplicate discovery information 00:33:42.351 sectype: none 00:33:42.351 =====Discovery Log Entry 1====== 00:33:42.351 trtype: tcp 00:33:42.351 adrfam: ipv4 00:33:42.351 subtype: nvme subsystem 00:33:42.351 treq: not required 00:33:42.351 portid: 0 00:33:42.351 trsvcid: 4420 00:33:42.351 subnqn: nqn.2016-06.io.spdk:cnode1 00:33:42.351 traddr: 10.0.0.2 00:33:42.351 
eflags: none 00:33:42.351 sectype: none 00:33:42.351 =====Discovery Log Entry 2====== 00:33:42.351 trtype: tcp 00:33:42.351 adrfam: ipv4 00:33:42.351 subtype: nvme subsystem 00:33:42.351 treq: not required 00:33:42.351 portid: 0 00:33:42.351 trsvcid: 4420 00:33:42.351 subnqn: nqn.2016-06.io.spdk:cnode2 00:33:42.351 traddr: 10.0.0.2 00:33:42.351 eflags: none 00:33:42.351 sectype: none 00:33:42.351 =====Discovery Log Entry 3====== 00:33:42.351 trtype: tcp 00:33:42.351 adrfam: ipv4 00:33:42.351 subtype: nvme subsystem 00:33:42.351 treq: not required 00:33:42.351 portid: 0 00:33:42.351 trsvcid: 4420 00:33:42.351 subnqn: nqn.2016-06.io.spdk:cnode3 00:33:42.351 traddr: 10.0.0.2 00:33:42.351 eflags: none 00:33:42.351 sectype: none 00:33:42.351 =====Discovery Log Entry 4====== 00:33:42.351 trtype: tcp 00:33:42.351 adrfam: ipv4 00:33:42.351 subtype: nvme subsystem 00:33:42.351 treq: not required 00:33:42.351 portid: 0 00:33:42.351 trsvcid: 4420 00:33:42.351 subnqn: nqn.2016-06.io.spdk:cnode4 00:33:42.351 traddr: 10.0.0.2 00:33:42.351 eflags: none 00:33:42.351 sectype: none 00:33:42.351 =====Discovery Log Entry 5====== 00:33:42.352 trtype: tcp 00:33:42.352 adrfam: ipv4 00:33:42.352 subtype: discovery subsystem referral 00:33:42.352 treq: not required 00:33:42.352 portid: 0 00:33:42.352 trsvcid: 4430 00:33:42.352 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:33:42.352 traddr: 10.0.0.2 00:33:42.352 eflags: none 00:33:42.352 sectype: none 00:33:42.352 12:56:01 -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:33:42.352 Perform nvmf subsystem discovery via RPC 00:33:42.352 12:56:01 -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:33:42.352 12:56:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:42.352 12:56:01 -- common/autotest_common.sh@10 -- # set +x 00:33:42.352 [2024-07-22 12:56:01.726074] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:33:42.352 [ 00:33:42.352 { 00:33:42.352 "allow_any_host": true, 00:33:42.352 "hosts": [], 00:33:42.352 "listen_addresses": [ 00:33:42.352 { 00:33:42.352 "adrfam": "IPv4", 00:33:42.352 "traddr": "10.0.0.2", 00:33:42.352 "transport": "TCP", 00:33:42.352 "trsvcid": "4420", 00:33:42.352 "trtype": "TCP" 00:33:42.352 } 00:33:42.352 ], 00:33:42.352 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:33:42.352 "subtype": "Discovery" 00:33:42.352 }, 00:33:42.352 { 00:33:42.352 "allow_any_host": true, 00:33:42.352 "hosts": [], 00:33:42.352 "listen_addresses": [ 00:33:42.352 { 00:33:42.352 "adrfam": "IPv4", 00:33:42.352 "traddr": "10.0.0.2", 00:33:42.352 "transport": "TCP", 00:33:42.352 "trsvcid": "4420", 00:33:42.352 "trtype": "TCP" 00:33:42.352 } 00:33:42.352 ], 00:33:42.352 "max_cntlid": 65519, 00:33:42.352 "max_namespaces": 32, 00:33:42.352 "min_cntlid": 1, 00:33:42.352 "model_number": "SPDK bdev Controller", 00:33:42.352 "namespaces": [ 00:33:42.352 { 00:33:42.352 "bdev_name": "Null1", 00:33:42.352 "name": "Null1", 00:33:42.352 "nguid": "411B2C9FE1374937B4F19586E9776FEF", 00:33:42.352 "nsid": 1, 00:33:42.352 "uuid": "411b2c9f-e137-4937-b4f1-9586e9776fef" 00:33:42.352 } 00:33:42.352 ], 00:33:42.352 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:33:42.352 "serial_number": "SPDK00000000000001", 00:33:42.352 "subtype": "NVMe" 00:33:42.352 }, 00:33:42.352 { 00:33:42.352 "allow_any_host": true, 00:33:42.352 "hosts": [], 00:33:42.352 "listen_addresses": [ 00:33:42.352 { 00:33:42.352 "adrfam": "IPv4", 
00:33:42.352 "traddr": "10.0.0.2", 00:33:42.352 "transport": "TCP", 00:33:42.352 "trsvcid": "4420", 00:33:42.352 "trtype": "TCP" 00:33:42.352 } 00:33:42.352 ], 00:33:42.352 "max_cntlid": 65519, 00:33:42.352 "max_namespaces": 32, 00:33:42.352 "min_cntlid": 1, 00:33:42.352 "model_number": "SPDK bdev Controller", 00:33:42.352 "namespaces": [ 00:33:42.352 { 00:33:42.352 "bdev_name": "Null2", 00:33:42.352 "name": "Null2", 00:33:42.352 "nguid": "CBF8384990CB4B6984AC10230B72D7C0", 00:33:42.352 "nsid": 1, 00:33:42.352 "uuid": "cbf83849-90cb-4b69-84ac-10230b72d7c0" 00:33:42.352 } 00:33:42.352 ], 00:33:42.352 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:33:42.352 "serial_number": "SPDK00000000000002", 00:33:42.352 "subtype": "NVMe" 00:33:42.352 }, 00:33:42.352 { 00:33:42.352 "allow_any_host": true, 00:33:42.352 "hosts": [], 00:33:42.352 "listen_addresses": [ 00:33:42.352 { 00:33:42.352 "adrfam": "IPv4", 00:33:42.352 "traddr": "10.0.0.2", 00:33:42.352 "transport": "TCP", 00:33:42.352 "trsvcid": "4420", 00:33:42.352 "trtype": "TCP" 00:33:42.352 } 00:33:42.352 ], 00:33:42.352 "max_cntlid": 65519, 00:33:42.352 "max_namespaces": 32, 00:33:42.352 "min_cntlid": 1, 00:33:42.352 "model_number": "SPDK bdev Controller", 00:33:42.352 "namespaces": [ 00:33:42.352 { 00:33:42.352 "bdev_name": "Null3", 00:33:42.352 "name": "Null3", 00:33:42.352 "nguid": "386778FD96F94E4D889980E6AA6313A2", 00:33:42.352 "nsid": 1, 00:33:42.352 "uuid": "386778fd-96f9-4e4d-8899-80e6aa6313a2" 00:33:42.352 } 00:33:42.352 ], 00:33:42.352 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:33:42.352 "serial_number": "SPDK00000000000003", 00:33:42.352 "subtype": "NVMe" 00:33:42.352 }, 00:33:42.352 { 00:33:42.352 "allow_any_host": true, 00:33:42.352 "hosts": [], 00:33:42.352 "listen_addresses": [ 00:33:42.352 { 00:33:42.352 "adrfam": "IPv4", 00:33:42.352 "traddr": "10.0.0.2", 00:33:42.352 "transport": "TCP", 00:33:42.352 "trsvcid": "4420", 00:33:42.352 "trtype": "TCP" 00:33:42.352 } 00:33:42.352 ], 00:33:42.352 "max_cntlid": 65519, 00:33:42.352 "max_namespaces": 32, 00:33:42.352 "min_cntlid": 1, 00:33:42.352 "model_number": "SPDK bdev Controller", 00:33:42.352 "namespaces": [ 00:33:42.352 { 00:33:42.352 "bdev_name": "Null4", 00:33:42.352 "name": "Null4", 00:33:42.352 "nguid": "A7A3B1C5434447B5B1666EDA4513ABC5", 00:33:42.352 "nsid": 1, 00:33:42.352 "uuid": "a7a3b1c5-4344-47b5-b166-6eda4513abc5" 00:33:42.352 } 00:33:42.352 ], 00:33:42.352 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:33:42.352 "serial_number": "SPDK00000000000004", 00:33:42.352 "subtype": "NVMe" 00:33:42.352 } 00:33:42.352 ] 00:33:42.352 12:56:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:42.352 12:56:01 -- target/discovery.sh@42 -- # seq 1 4 00:33:42.352 12:56:01 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:33:42.352 12:56:01 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:42.352 12:56:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:42.352 12:56:01 -- common/autotest_common.sh@10 -- # set +x 00:33:42.352 12:56:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:42.352 12:56:01 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:33:42.352 12:56:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:42.352 12:56:01 -- common/autotest_common.sh@10 -- # set +x 00:33:42.611 12:56:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:42.611 12:56:01 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:33:42.611 12:56:01 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode2 00:33:42.611 12:56:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:42.611 12:56:01 -- common/autotest_common.sh@10 -- # set +x 00:33:42.611 12:56:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:42.611 12:56:01 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:33:42.611 12:56:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:42.611 12:56:01 -- common/autotest_common.sh@10 -- # set +x 00:33:42.611 12:56:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:42.611 12:56:01 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:33:42.611 12:56:01 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:33:42.611 12:56:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:42.611 12:56:01 -- common/autotest_common.sh@10 -- # set +x 00:33:42.611 12:56:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:42.611 12:56:01 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:33:42.611 12:56:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:42.611 12:56:01 -- common/autotest_common.sh@10 -- # set +x 00:33:42.611 12:56:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:42.612 12:56:01 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:33:42.612 12:56:01 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:33:42.612 12:56:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:42.612 12:56:01 -- common/autotest_common.sh@10 -- # set +x 00:33:42.612 12:56:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:42.612 12:56:01 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:33:42.612 12:56:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:42.612 12:56:01 -- common/autotest_common.sh@10 -- # set +x 00:33:42.612 12:56:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:42.612 12:56:01 -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:33:42.612 12:56:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:42.612 12:56:01 -- common/autotest_common.sh@10 -- # set +x 00:33:42.612 12:56:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:42.612 12:56:01 -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:33:42.612 12:56:01 -- target/discovery.sh@49 -- # jq -r '.[].name' 00:33:42.612 12:56:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:42.612 12:56:01 -- common/autotest_common.sh@10 -- # set +x 00:33:42.612 12:56:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:42.612 12:56:01 -- target/discovery.sh@49 -- # check_bdevs= 00:33:42.612 12:56:01 -- target/discovery.sh@50 -- # '[' -n '' ']' 00:33:42.612 12:56:01 -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:33:42.612 12:56:01 -- target/discovery.sh@57 -- # nvmftestfini 00:33:42.612 12:56:01 -- nvmf/common.sh@476 -- # nvmfcleanup 00:33:42.612 12:56:01 -- nvmf/common.sh@116 -- # sync 00:33:42.612 12:56:01 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:33:42.612 12:56:01 -- nvmf/common.sh@119 -- # set +e 00:33:42.612 12:56:01 -- nvmf/common.sh@120 -- # for i in {1..20} 00:33:42.612 12:56:01 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:33:42.612 rmmod nvme_tcp 00:33:42.612 rmmod nvme_fabrics 00:33:42.612 rmmod nvme_keyring 00:33:42.612 12:56:01 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:33:42.612 12:56:01 -- nvmf/common.sh@123 -- # set -e 00:33:42.612 12:56:01 -- nvmf/common.sh@124 -- # 
return 0 00:33:42.612 12:56:01 -- nvmf/common.sh@477 -- # '[' -n 72923 ']' 00:33:42.612 12:56:01 -- nvmf/common.sh@478 -- # killprocess 72923 00:33:42.612 12:56:01 -- common/autotest_common.sh@926 -- # '[' -z 72923 ']' 00:33:42.612 12:56:01 -- common/autotest_common.sh@930 -- # kill -0 72923 00:33:42.612 12:56:01 -- common/autotest_common.sh@931 -- # uname 00:33:42.612 12:56:01 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:33:42.612 12:56:01 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 72923 00:33:42.612 killing process with pid 72923 00:33:42.612 12:56:01 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:33:42.612 12:56:01 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:33:42.612 12:56:01 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 72923' 00:33:42.612 12:56:01 -- common/autotest_common.sh@945 -- # kill 72923 00:33:42.612 [2024-07-22 12:56:01.998054] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:33:42.612 12:56:01 -- common/autotest_common.sh@950 -- # wait 72923 00:33:42.871 12:56:02 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:33:42.871 12:56:02 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:33:42.871 12:56:02 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:33:42.871 12:56:02 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:42.871 12:56:02 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:33:42.871 12:56:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:42.871 12:56:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:42.871 12:56:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:42.871 12:56:02 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:33:42.871 00:33:42.871 real 0m2.335s 00:33:42.871 user 0m6.336s 00:33:42.872 sys 0m0.597s 00:33:42.872 12:56:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:42.872 12:56:02 -- common/autotest_common.sh@10 -- # set +x 00:33:42.872 ************************************ 00:33:42.872 END TEST nvmf_discovery 00:33:42.872 ************************************ 00:33:43.131 12:56:02 -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:33:43.131 12:56:02 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:33:43.131 12:56:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:43.131 12:56:02 -- common/autotest_common.sh@10 -- # set +x 00:33:43.131 ************************************ 00:33:43.131 START TEST nvmf_referrals 00:33:43.131 ************************************ 00:33:43.131 12:56:02 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:33:43.131 * Looking for test storage... 
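[annotation] The discovery test above ends by querying the target over its RPC socket and tearing the subsystems down; a minimal sketch of the same round trip, assuming the default RPC socket /var/tmp/spdk.sock and the scripts/rpc.py helper from the SPDK repo (the jq filter is illustrative):
  # list every configured subsystem and print only the NQNs
  scripts/rpc.py nvmf_get_subsystems | jq -r '.[].nqn'
  # drop one data subsystem and its backing null bdev, as the cleanup loop above does for cnode1..cnode4
  scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  scripts/rpc.py bdev_null_delete Null1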
00:33:43.131 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:33:43.131 12:56:02 -- target/referrals.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:33:43.131 12:56:02 -- nvmf/common.sh@7 -- # uname -s 00:33:43.131 12:56:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:43.131 12:56:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:43.131 12:56:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:43.131 12:56:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:43.131 12:56:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:43.131 12:56:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:43.131 12:56:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:43.131 12:56:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:43.131 12:56:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:43.131 12:56:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:43.131 12:56:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:864ae4a3-cd96-439b-8ef2-6dab4d992115 00:33:43.131 12:56:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=864ae4a3-cd96-439b-8ef2-6dab4d992115 00:33:43.131 12:56:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:43.131 12:56:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:43.131 12:56:02 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:33:43.131 12:56:02 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:33:43.131 12:56:02 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:43.131 12:56:02 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:43.131 12:56:02 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:43.131 12:56:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:43.131 12:56:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:43.131 12:56:02 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:43.131 12:56:02 -- 
paths/export.sh@5 -- # export PATH 00:33:43.131 12:56:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:43.131 12:56:02 -- nvmf/common.sh@46 -- # : 0 00:33:43.131 12:56:02 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:33:43.131 12:56:02 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:33:43.131 12:56:02 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:33:43.131 12:56:02 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:43.131 12:56:02 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:43.131 12:56:02 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:33:43.131 12:56:02 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:33:43.131 12:56:02 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:33:43.131 12:56:02 -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:33:43.131 12:56:02 -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:33:43.131 12:56:02 -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:33:43.131 12:56:02 -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:33:43.131 12:56:02 -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:33:43.131 12:56:02 -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:33:43.131 12:56:02 -- target/referrals.sh@37 -- # nvmftestinit 00:33:43.131 12:56:02 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:33:43.131 12:56:02 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:43.131 12:56:02 -- nvmf/common.sh@436 -- # prepare_net_devs 00:33:43.131 12:56:02 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:33:43.131 12:56:02 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:33:43.131 12:56:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:43.131 12:56:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:43.131 12:56:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:43.131 12:56:02 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:33:43.131 12:56:02 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:33:43.131 12:56:02 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:33:43.131 12:56:02 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:33:43.131 12:56:02 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:33:43.132 12:56:02 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:33:43.132 12:56:02 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:43.132 12:56:02 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:43.132 12:56:02 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:33:43.132 12:56:02 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:33:43.132 12:56:02 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:33:43.132 12:56:02 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:33:43.132 12:56:02 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:33:43.132 12:56:02 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:43.132 12:56:02 -- nvmf/common.sh@148 -- # 
NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:33:43.132 12:56:02 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:33:43.132 12:56:02 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:33:43.132 12:56:02 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:33:43.132 12:56:02 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:33:43.132 12:56:02 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:33:43.132 Cannot find device "nvmf_tgt_br" 00:33:43.132 12:56:02 -- nvmf/common.sh@154 -- # true 00:33:43.132 12:56:02 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:33:43.132 Cannot find device "nvmf_tgt_br2" 00:33:43.132 12:56:02 -- nvmf/common.sh@155 -- # true 00:33:43.132 12:56:02 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:33:43.132 12:56:02 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:33:43.132 Cannot find device "nvmf_tgt_br" 00:33:43.132 12:56:02 -- nvmf/common.sh@157 -- # true 00:33:43.132 12:56:02 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:33:43.132 Cannot find device "nvmf_tgt_br2" 00:33:43.132 12:56:02 -- nvmf/common.sh@158 -- # true 00:33:43.132 12:56:02 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:33:43.132 12:56:02 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:33:43.132 12:56:02 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:33:43.391 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:33:43.391 12:56:02 -- nvmf/common.sh@161 -- # true 00:33:43.391 12:56:02 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:33:43.391 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:33:43.391 12:56:02 -- nvmf/common.sh@162 -- # true 00:33:43.391 12:56:02 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:33:43.391 12:56:02 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:33:43.391 12:56:02 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:33:43.391 12:56:02 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:33:43.391 12:56:02 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:33:43.391 12:56:02 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:33:43.391 12:56:02 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:33:43.391 12:56:02 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:33:43.391 12:56:02 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:33:43.391 12:56:02 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:33:43.391 12:56:02 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:33:43.391 12:56:02 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:33:43.391 12:56:02 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:33:43.391 12:56:02 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:33:43.391 12:56:02 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:33:43.391 12:56:02 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:33:43.391 12:56:02 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:33:43.391 12:56:02 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:33:43.391 12:56:02 -- 
nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:33:43.391 12:56:02 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:33:43.391 12:56:02 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:33:43.391 12:56:02 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:33:43.391 12:56:02 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:33:43.391 12:56:02 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:33:43.391 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:43.391 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:33:43.391 00:33:43.391 --- 10.0.0.2 ping statistics --- 00:33:43.391 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:43.391 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:33:43.391 12:56:02 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:33:43.391 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:33:43.391 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:33:43.391 00:33:43.391 --- 10.0.0.3 ping statistics --- 00:33:43.391 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:43.391 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:33:43.391 12:56:02 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:33:43.391 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:43.391 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:33:43.391 00:33:43.391 --- 10.0.0.1 ping statistics --- 00:33:43.391 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:43.391 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:33:43.391 12:56:02 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:43.391 12:56:02 -- nvmf/common.sh@421 -- # return 0 00:33:43.391 12:56:02 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:33:43.391 12:56:02 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:43.391 12:56:02 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:33:43.391 12:56:02 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:33:43.391 12:56:02 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:43.391 12:56:02 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:33:43.391 12:56:02 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:33:43.391 12:56:02 -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:33:43.391 12:56:02 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:33:43.391 12:56:02 -- common/autotest_common.sh@712 -- # xtrace_disable 00:33:43.391 12:56:02 -- common/autotest_common.sh@10 -- # set +x 00:33:43.391 12:56:02 -- nvmf/common.sh@469 -- # nvmfpid=73145 00:33:43.391 12:56:02 -- nvmf/common.sh@470 -- # waitforlisten 73145 00:33:43.391 12:56:02 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:33:43.391 12:56:02 -- common/autotest_common.sh@819 -- # '[' -z 73145 ']' 00:33:43.391 12:56:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:43.391 12:56:02 -- common/autotest_common.sh@824 -- # local max_retries=100 00:33:43.391 12:56:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:43.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
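[annotation] nvmf_veth_init above wires the initiator and the namespaced target onto one bridge before the app starts; a condensed sketch of that topology, assuming iproute2 and the interface names used in this log:
  ip netns add nvmf_tgt_ns_spdk                               # target runs in its own network namespace
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator-side veth pair
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target-side veth pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  for l in nvmf_init_if nvmf_init_br nvmf_tgt_br; do ip link set "$l" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br && ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                          # connectivity check, as done above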
00:33:43.391 12:56:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:33:43.391 12:56:02 -- common/autotest_common.sh@10 -- # set +x 00:33:43.651 [2024-07-22 12:56:02.858384] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:33:43.651 [2024-07-22 12:56:02.858539] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:43.651 [2024-07-22 12:56:03.000614] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:43.909 [2024-07-22 12:56:03.103942] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:33:43.909 [2024-07-22 12:56:03.104073] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:43.909 [2024-07-22 12:56:03.104085] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:43.909 [2024-07-22 12:56:03.104094] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:43.909 [2024-07-22 12:56:03.104223] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:43.909 [2024-07-22 12:56:03.104400] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:33:43.909 [2024-07-22 12:56:03.104443] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:33:43.909 [2024-07-22 12:56:03.104444] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:44.474 12:56:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:33:44.474 12:56:03 -- common/autotest_common.sh@852 -- # return 0 00:33:44.474 12:56:03 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:33:44.474 12:56:03 -- common/autotest_common.sh@718 -- # xtrace_disable 00:33:44.474 12:56:03 -- common/autotest_common.sh@10 -- # set +x 00:33:44.732 12:56:03 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:44.732 12:56:03 -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:44.732 12:56:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:44.732 12:56:03 -- common/autotest_common.sh@10 -- # set +x 00:33:44.732 [2024-07-22 12:56:03.925513] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:44.732 12:56:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:44.732 12:56:03 -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:33:44.732 12:56:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:44.732 12:56:03 -- common/autotest_common.sh@10 -- # set +x 00:33:44.732 [2024-07-22 12:56:03.955598] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:33:44.732 12:56:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:44.732 12:56:03 -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:33:44.732 12:56:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:44.732 12:56:03 -- common/autotest_common.sh@10 -- # set +x 00:33:44.732 12:56:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:44.732 12:56:03 -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:33:44.732 12:56:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:44.732 12:56:03 -- 
common/autotest_common.sh@10 -- # set +x 00:33:44.732 12:56:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:44.732 12:56:03 -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:33:44.732 12:56:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:44.732 12:56:03 -- common/autotest_common.sh@10 -- # set +x 00:33:44.732 12:56:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:44.732 12:56:03 -- target/referrals.sh@48 -- # jq length 00:33:44.732 12:56:03 -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:33:44.732 12:56:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:44.732 12:56:03 -- common/autotest_common.sh@10 -- # set +x 00:33:44.732 12:56:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:44.732 12:56:04 -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:33:44.732 12:56:04 -- target/referrals.sh@49 -- # get_referral_ips rpc 00:33:44.732 12:56:04 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:33:44.732 12:56:04 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:33:44.732 12:56:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:44.732 12:56:04 -- common/autotest_common.sh@10 -- # set +x 00:33:44.732 12:56:04 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:33:44.732 12:56:04 -- target/referrals.sh@21 -- # sort 00:33:44.732 12:56:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:44.732 12:56:04 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:33:44.732 12:56:04 -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:33:44.732 12:56:04 -- target/referrals.sh@50 -- # get_referral_ips nvme 00:33:44.732 12:56:04 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:33:44.732 12:56:04 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:33:44.732 12:56:04 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:864ae4a3-cd96-439b-8ef2-6dab4d992115 --hostid=864ae4a3-cd96-439b-8ef2-6dab4d992115 -t tcp -a 10.0.0.2 -s 8009 -o json 00:33:44.732 12:56:04 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:33:44.732 12:56:04 -- target/referrals.sh@26 -- # sort 00:33:44.992 12:56:04 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:33:44.992 12:56:04 -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:33:44.992 12:56:04 -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:33:44.992 12:56:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:44.992 12:56:04 -- common/autotest_common.sh@10 -- # set +x 00:33:44.992 12:56:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:44.992 12:56:04 -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:33:44.992 12:56:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:44.992 12:56:04 -- common/autotest_common.sh@10 -- # set +x 00:33:44.992 12:56:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:44.992 12:56:04 -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:33:44.992 12:56:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:44.992 12:56:04 -- common/autotest_common.sh@10 -- # set +x 00:33:44.992 12:56:04 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:44.992 12:56:04 -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:33:44.992 12:56:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:44.992 12:56:04 -- common/autotest_common.sh@10 -- # set +x 00:33:44.992 12:56:04 -- target/referrals.sh@56 -- # jq length 00:33:44.992 12:56:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:44.992 12:56:04 -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:33:44.992 12:56:04 -- target/referrals.sh@57 -- # get_referral_ips nvme 00:33:44.992 12:56:04 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:33:44.992 12:56:04 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:33:44.992 12:56:04 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:864ae4a3-cd96-439b-8ef2-6dab4d992115 --hostid=864ae4a3-cd96-439b-8ef2-6dab4d992115 -t tcp -a 10.0.0.2 -s 8009 -o json 00:33:44.992 12:56:04 -- target/referrals.sh@26 -- # sort 00:33:44.992 12:56:04 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:33:44.992 12:56:04 -- target/referrals.sh@26 -- # echo 00:33:44.992 12:56:04 -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:33:44.992 12:56:04 -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:33:44.992 12:56:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:44.992 12:56:04 -- common/autotest_common.sh@10 -- # set +x 00:33:44.992 12:56:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:44.992 12:56:04 -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:33:44.992 12:56:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:44.992 12:56:04 -- common/autotest_common.sh@10 -- # set +x 00:33:44.992 12:56:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:44.992 12:56:04 -- target/referrals.sh@65 -- # get_referral_ips rpc 00:33:44.992 12:56:04 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:33:44.992 12:56:04 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:33:44.992 12:56:04 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:33:44.992 12:56:04 -- target/referrals.sh@21 -- # sort 00:33:44.992 12:56:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:44.992 12:56:04 -- common/autotest_common.sh@10 -- # set +x 00:33:45.408 12:56:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:45.408 12:56:04 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:33:45.408 12:56:04 -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:33:45.408 12:56:04 -- target/referrals.sh@66 -- # get_referral_ips nvme 00:33:45.408 12:56:04 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:33:45.408 12:56:04 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:33:45.408 12:56:04 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:864ae4a3-cd96-439b-8ef2-6dab4d992115 --hostid=864ae4a3-cd96-439b-8ef2-6dab4d992115 -t tcp -a 10.0.0.2 -s 8009 -o json 00:33:45.408 12:56:04 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:33:45.408 12:56:04 -- target/referrals.sh@26 -- # sort 00:33:45.408 12:56:04 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:33:45.408 12:56:04 -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == 
\1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:33:45.408 12:56:04 -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:33:45.408 12:56:04 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:33:45.408 12:56:04 -- target/referrals.sh@67 -- # jq -r .subnqn 00:33:45.408 12:56:04 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:33:45.408 12:56:04 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:864ae4a3-cd96-439b-8ef2-6dab4d992115 --hostid=864ae4a3-cd96-439b-8ef2-6dab4d992115 -t tcp -a 10.0.0.2 -s 8009 -o json 00:33:45.408 12:56:04 -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:33:45.408 12:56:04 -- target/referrals.sh@68 -- # jq -r .subnqn 00:33:45.408 12:56:04 -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:33:45.408 12:56:04 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:33:45.408 12:56:04 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:864ae4a3-cd96-439b-8ef2-6dab4d992115 --hostid=864ae4a3-cd96-439b-8ef2-6dab4d992115 -t tcp -a 10.0.0.2 -s 8009 -o json 00:33:45.408 12:56:04 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:33:45.408 12:56:04 -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:33:45.408 12:56:04 -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:33:45.408 12:56:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:45.408 12:56:04 -- common/autotest_common.sh@10 -- # set +x 00:33:45.408 12:56:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:45.408 12:56:04 -- target/referrals.sh@73 -- # get_referral_ips rpc 00:33:45.408 12:56:04 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:33:45.408 12:56:04 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:33:45.408 12:56:04 -- target/referrals.sh@21 -- # sort 00:33:45.408 12:56:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:45.408 12:56:04 -- common/autotest_common.sh@10 -- # set +x 00:33:45.408 12:56:04 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:33:45.408 12:56:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:45.408 12:56:04 -- target/referrals.sh@21 -- # echo 127.0.0.2 00:33:45.408 12:56:04 -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:33:45.408 12:56:04 -- target/referrals.sh@74 -- # get_referral_ips nvme 00:33:45.408 12:56:04 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:33:45.408 12:56:04 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:33:45.409 12:56:04 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:864ae4a3-cd96-439b-8ef2-6dab4d992115 --hostid=864ae4a3-cd96-439b-8ef2-6dab4d992115 -t tcp -a 10.0.0.2 -s 8009 -o json 00:33:45.409 12:56:04 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:33:45.409 12:56:04 -- target/referrals.sh@26 -- # sort 00:33:45.670 12:56:04 -- target/referrals.sh@26 -- # echo 127.0.0.2 00:33:45.670 12:56:04 -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:33:45.670 12:56:04 -- target/referrals.sh@75 -- # jq -r .subnqn 00:33:45.670 12:56:04 
-- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:33:45.670 12:56:04 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:33:45.670 12:56:04 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:33:45.670 12:56:04 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:864ae4a3-cd96-439b-8ef2-6dab4d992115 --hostid=864ae4a3-cd96-439b-8ef2-6dab4d992115 -t tcp -a 10.0.0.2 -s 8009 -o json 00:33:45.670 12:56:04 -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:33:45.670 12:56:04 -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:33:45.670 12:56:04 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:33:45.670 12:56:04 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:864ae4a3-cd96-439b-8ef2-6dab4d992115 --hostid=864ae4a3-cd96-439b-8ef2-6dab4d992115 -t tcp -a 10.0.0.2 -s 8009 -o json 00:33:45.670 12:56:04 -- target/referrals.sh@76 -- # jq -r .subnqn 00:33:45.670 12:56:04 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:33:45.670 12:56:04 -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:33:45.670 12:56:04 -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:33:45.670 12:56:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:45.670 12:56:04 -- common/autotest_common.sh@10 -- # set +x 00:33:45.670 12:56:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:45.670 12:56:04 -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:33:45.670 12:56:04 -- target/referrals.sh@82 -- # jq length 00:33:45.670 12:56:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:45.670 12:56:04 -- common/autotest_common.sh@10 -- # set +x 00:33:45.670 12:56:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:45.670 12:56:05 -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:33:45.670 12:56:05 -- target/referrals.sh@83 -- # get_referral_ips nvme 00:33:45.670 12:56:05 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:33:45.670 12:56:05 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:33:45.670 12:56:05 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:864ae4a3-cd96-439b-8ef2-6dab4d992115 --hostid=864ae4a3-cd96-439b-8ef2-6dab4d992115 -t tcp -a 10.0.0.2 -s 8009 -o json 00:33:45.670 12:56:05 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:33:45.671 12:56:05 -- target/referrals.sh@26 -- # sort 00:33:45.671 12:56:05 -- target/referrals.sh@26 -- # echo 00:33:45.671 12:56:05 -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:33:45.671 12:56:05 -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:33:45.671 12:56:05 -- target/referrals.sh@86 -- # nvmftestfini 00:33:45.671 12:56:05 -- nvmf/common.sh@476 -- # nvmfcleanup 00:33:45.671 12:56:05 -- nvmf/common.sh@116 -- # sync 00:33:45.930 12:56:05 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:33:45.930 12:56:05 -- nvmf/common.sh@119 -- # set +e 00:33:45.930 12:56:05 -- nvmf/common.sh@120 -- # for i in {1..20} 00:33:45.930 12:56:05 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:33:45.930 rmmod nvme_tcp 00:33:45.930 rmmod nvme_fabrics 00:33:45.930 rmmod nvme_keyring 00:33:45.930 
12:56:05 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:33:45.930 12:56:05 -- nvmf/common.sh@123 -- # set -e 00:33:45.930 12:56:05 -- nvmf/common.sh@124 -- # return 0 00:33:45.930 12:56:05 -- nvmf/common.sh@477 -- # '[' -n 73145 ']' 00:33:45.930 12:56:05 -- nvmf/common.sh@478 -- # killprocess 73145 00:33:45.930 12:56:05 -- common/autotest_common.sh@926 -- # '[' -z 73145 ']' 00:33:45.930 12:56:05 -- common/autotest_common.sh@930 -- # kill -0 73145 00:33:45.930 12:56:05 -- common/autotest_common.sh@931 -- # uname 00:33:45.930 12:56:05 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:33:45.930 12:56:05 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 73145 00:33:45.930 12:56:05 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:33:45.930 killing process with pid 73145 00:33:45.930 12:56:05 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:33:45.930 12:56:05 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 73145' 00:33:45.930 12:56:05 -- common/autotest_common.sh@945 -- # kill 73145 00:33:45.930 12:56:05 -- common/autotest_common.sh@950 -- # wait 73145 00:33:46.193 12:56:05 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:33:46.193 12:56:05 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:33:46.193 12:56:05 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:33:46.193 12:56:05 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:46.193 12:56:05 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:33:46.193 12:56:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:46.193 12:56:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:46.193 12:56:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:46.193 12:56:05 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:33:46.193 00:33:46.193 real 0m3.167s 00:33:46.193 user 0m10.415s 00:33:46.193 sys 0m0.822s 00:33:46.193 12:56:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:46.193 12:56:05 -- common/autotest_common.sh@10 -- # set +x 00:33:46.193 ************************************ 00:33:46.193 END TEST nvmf_referrals 00:33:46.193 ************************************ 00:33:46.193 12:56:05 -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:33:46.193 12:56:05 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:33:46.193 12:56:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:46.193 12:56:05 -- common/autotest_common.sh@10 -- # set +x 00:33:46.193 ************************************ 00:33:46.193 START TEST nvmf_connect_disconnect 00:33:46.193 ************************************ 00:33:46.193 12:56:05 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:33:46.193 * Looking for test storage... 
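[annotation] The referrals test that just finished registers discovery referrals over RPC and verifies them from the host side; a minimal sketch of one such round trip, assuming scripts/rpc.py on the default socket and nvme-cli on the initiator:
  # point the discovery subsystem at another discovery service on 127.0.0.2:4430
  scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430
  scripts/rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr'
  # the referral should also appear in the host-side discovery log on port 8009
  nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json | jq -r '.records[].traddr'
  scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430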
00:33:46.193 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:33:46.193 12:56:05 -- target/connect_disconnect.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:33:46.193 12:56:05 -- nvmf/common.sh@7 -- # uname -s 00:33:46.193 12:56:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:46.193 12:56:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:46.193 12:56:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:46.193 12:56:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:46.193 12:56:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:46.193 12:56:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:46.193 12:56:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:46.193 12:56:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:46.193 12:56:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:46.193 12:56:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:46.193 12:56:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:864ae4a3-cd96-439b-8ef2-6dab4d992115 00:33:46.193 12:56:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=864ae4a3-cd96-439b-8ef2-6dab4d992115 00:33:46.193 12:56:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:46.193 12:56:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:46.193 12:56:05 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:33:46.193 12:56:05 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:33:46.193 12:56:05 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:46.193 12:56:05 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:46.193 12:56:05 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:46.193 12:56:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:46.454 12:56:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:46.454 12:56:05 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:46.454 12:56:05 -- 
paths/export.sh@5 -- # export PATH 00:33:46.454 12:56:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:46.454 12:56:05 -- nvmf/common.sh@46 -- # : 0 00:33:46.454 12:56:05 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:33:46.454 12:56:05 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:33:46.454 12:56:05 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:33:46.454 12:56:05 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:46.454 12:56:05 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:46.454 12:56:05 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:33:46.454 12:56:05 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:33:46.454 12:56:05 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:33:46.454 12:56:05 -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:46.454 12:56:05 -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:46.454 12:56:05 -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:33:46.454 12:56:05 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:33:46.454 12:56:05 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:46.454 12:56:05 -- nvmf/common.sh@436 -- # prepare_net_devs 00:33:46.454 12:56:05 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:33:46.454 12:56:05 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:33:46.454 12:56:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:46.454 12:56:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:46.454 12:56:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:46.454 12:56:05 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:33:46.454 12:56:05 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:33:46.454 12:56:05 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:33:46.454 12:56:05 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:33:46.454 12:56:05 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:33:46.454 12:56:05 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:33:46.454 12:56:05 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:46.454 12:56:05 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:46.454 12:56:05 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:33:46.454 12:56:05 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:33:46.454 12:56:05 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:33:46.454 12:56:05 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:33:46.454 12:56:05 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:33:46.454 12:56:05 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:46.454 12:56:05 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:33:46.454 12:56:05 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:33:46.454 12:56:05 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:33:46.454 12:56:05 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:33:46.454 12:56:05 -- nvmf/common.sh@153 -- # ip link set 
nvmf_init_br nomaster 00:33:46.454 12:56:05 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:33:46.454 Cannot find device "nvmf_tgt_br" 00:33:46.454 12:56:05 -- nvmf/common.sh@154 -- # true 00:33:46.454 12:56:05 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:33:46.454 Cannot find device "nvmf_tgt_br2" 00:33:46.454 12:56:05 -- nvmf/common.sh@155 -- # true 00:33:46.454 12:56:05 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:33:46.454 12:56:05 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:33:46.454 Cannot find device "nvmf_tgt_br" 00:33:46.454 12:56:05 -- nvmf/common.sh@157 -- # true 00:33:46.454 12:56:05 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:33:46.454 Cannot find device "nvmf_tgt_br2" 00:33:46.454 12:56:05 -- nvmf/common.sh@158 -- # true 00:33:46.454 12:56:05 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:33:46.454 12:56:05 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:33:46.454 12:56:05 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:33:46.454 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:33:46.454 12:56:05 -- nvmf/common.sh@161 -- # true 00:33:46.454 12:56:05 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:33:46.454 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:33:46.454 12:56:05 -- nvmf/common.sh@162 -- # true 00:33:46.454 12:56:05 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:33:46.454 12:56:05 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:33:46.454 12:56:05 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:33:46.454 12:56:05 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:33:46.454 12:56:05 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:33:46.454 12:56:05 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:33:46.454 12:56:05 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:33:46.454 12:56:05 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:33:46.714 12:56:05 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:33:46.714 12:56:05 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:33:46.714 12:56:05 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:33:46.714 12:56:05 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:33:46.714 12:56:05 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:33:46.714 12:56:05 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:33:46.714 12:56:05 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:33:46.714 12:56:05 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:33:46.714 12:56:05 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:33:46.714 12:56:05 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:33:46.714 12:56:05 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:33:46.714 12:56:05 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:33:46.714 12:56:05 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:33:46.714 12:56:05 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 
-j ACCEPT 00:33:46.714 12:56:05 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:33:46.714 12:56:05 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:33:46.714 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:46.714 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.084 ms 00:33:46.714 00:33:46.714 --- 10.0.0.2 ping statistics --- 00:33:46.714 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:46.714 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:33:46.714 12:56:05 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:33:46.714 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:33:46.714 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.074 ms 00:33:46.714 00:33:46.714 --- 10.0.0.3 ping statistics --- 00:33:46.714 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:46.714 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:33:46.714 12:56:05 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:33:46.714 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:46.714 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:33:46.714 00:33:46.714 --- 10.0.0.1 ping statistics --- 00:33:46.714 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:46.714 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:33:46.714 12:56:05 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:46.714 12:56:05 -- nvmf/common.sh@421 -- # return 0 00:33:46.714 12:56:05 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:33:46.714 12:56:05 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:46.714 12:56:05 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:33:46.714 12:56:05 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:33:46.714 12:56:05 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:46.714 12:56:05 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:33:46.714 12:56:05 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:33:46.714 12:56:06 -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:33:46.714 12:56:06 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:33:46.714 12:56:06 -- common/autotest_common.sh@712 -- # xtrace_disable 00:33:46.714 12:56:06 -- common/autotest_common.sh@10 -- # set +x 00:33:46.714 12:56:06 -- nvmf/common.sh@469 -- # nvmfpid=73448 00:33:46.714 12:56:06 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:33:46.714 12:56:06 -- nvmf/common.sh@470 -- # waitforlisten 73448 00:33:46.714 12:56:06 -- common/autotest_common.sh@819 -- # '[' -z 73448 ']' 00:33:46.714 12:56:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:46.714 12:56:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:33:46.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:46.714 12:56:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:46.714 12:56:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:33:46.714 12:56:06 -- common/autotest_common.sh@10 -- # set +x 00:33:46.714 [2024-07-22 12:56:06.073641] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
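[annotation] The connect_disconnect test starting here launches the target inside the namespace and provisions a single malloc-backed subsystem before looping over connects; condensed from the RPCs that follow in this log, assuming the same repo paths:
  ip netns exec nvmf_tgt_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  # once /var/tmp/spdk.sock is listening:
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
  scripts/rpc.py bdev_malloc_create 64 512                     # 64 MiB, 512-byte blocks; prints the bdev name (Malloc0 here)
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420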
00:33:46.714 [2024-07-22 12:56:06.073758] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:46.973 [2024-07-22 12:56:06.207900] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:46.973 [2024-07-22 12:56:06.310246] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:33:46.973 [2024-07-22 12:56:06.310573] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:46.973 [2024-07-22 12:56:06.310710] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:46.973 [2024-07-22 12:56:06.310815] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:46.973 [2024-07-22 12:56:06.311017] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:46.973 [2024-07-22 12:56:06.311245] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:46.973 [2024-07-22 12:56:06.311076] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:33:46.973 [2024-07-22 12:56:06.311233] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:33:47.908 12:56:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:33:47.908 12:56:07 -- common/autotest_common.sh@852 -- # return 0 00:33:47.908 12:56:07 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:33:47.908 12:56:07 -- common/autotest_common.sh@718 -- # xtrace_disable 00:33:47.908 12:56:07 -- common/autotest_common.sh@10 -- # set +x 00:33:47.908 12:56:07 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:47.908 12:56:07 -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:33:47.908 12:56:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:47.908 12:56:07 -- common/autotest_common.sh@10 -- # set +x 00:33:47.908 [2024-07-22 12:56:07.137573] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:47.908 12:56:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:47.908 12:56:07 -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:33:47.908 12:56:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:47.908 12:56:07 -- common/autotest_common.sh@10 -- # set +x 00:33:47.908 12:56:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:47.908 12:56:07 -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:33:47.908 12:56:07 -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:33:47.908 12:56:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:47.908 12:56:07 -- common/autotest_common.sh@10 -- # set +x 00:33:47.908 12:56:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:47.908 12:56:07 -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:47.908 12:56:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:47.908 12:56:07 -- common/autotest_common.sh@10 -- # set +x 00:33:47.908 12:56:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:47.908 12:56:07 -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:47.908 12:56:07 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:33:47.908 12:56:07 -- common/autotest_common.sh@10 -- # set +x 00:33:47.908 [2024-07-22 12:56:07.209039] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:47.908 12:56:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:47.908 12:56:07 -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:33:47.908 12:56:07 -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:33:47.908 12:56:07 -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:33:47.908 12:56:07 -- target/connect_disconnect.sh@34 -- # set +x 00:33:50.439 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:33:52.336 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:33:54.866 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:33:56.837 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:33:59.369 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:34:01.271 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:34:03.799 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:34:06.328 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:34:08.264 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:34:10.168 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:34:12.700 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:34:14.604 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:34:17.170 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:34:19.703 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:34:21.605 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:34:24.137 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:34:26.039 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:34:28.639 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:34:30.539 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:34:33.072 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:34:34.972 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:34:37.501 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:34:39.400 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:34:42.050 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:34:43.953 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:34:46.479 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:34:48.440 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:34:50.442 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:34:52.975 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:34:55.505 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:34:57.405 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:34:59.329 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:35:01.856 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:35:03.759 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:35:06.290 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:35:08.819 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:35:10.721 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:35:13.250 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:35:15.178 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:35:17.708 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:35:19.649 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:35:22.181 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:35:24.084 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:35:26.666 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:35:28.597 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:35:31.127 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:35:33.096 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:35:35.626 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:35:37.529 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:35:40.060 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:35:41.964 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:35:44.494 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:35:46.393 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:35:48.920 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:35:50.821 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:35:53.350 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:35:55.328 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:35:57.859 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:35:59.760 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:36:02.289 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:36:04.191 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:36:06.735 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:36:08.636 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:36:11.167 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:36:13.068 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:36:15.598 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:36:18.127 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:36:20.048 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:36:22.590 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:36:24.492 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:36:27.092 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:36:28.994 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:36:31.525 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:36:33.427 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:36:35.957 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:36:37.859 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:36:40.386 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:36:42.286 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:36:44.814 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:36:46.711 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:36:49.239 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:36:51.137 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:36:53.667 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:36:55.565 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:36:58.122 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:37:00.023 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:37:01.978 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:37:04.508 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:37:06.441 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:37:08.971 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:37:10.871 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:37:13.479 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:37:15.386 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:37:17.917 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:37:20.458 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:37:22.359 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:37:24.887 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:37:26.787 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:37:29.314 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:37:31.215 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:37:31.215 12:59:50 -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:37:31.215 12:59:50 -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:37:31.215 12:59:50 -- nvmf/common.sh@476 -- # nvmfcleanup 00:37:31.215 12:59:50 -- nvmf/common.sh@116 -- # sync 00:37:31.215 12:59:50 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:37:31.215 12:59:50 -- nvmf/common.sh@119 -- # set +e 00:37:31.215 12:59:50 -- nvmf/common.sh@120 -- # for i in {1..20} 00:37:31.215 12:59:50 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:37:31.215 rmmod nvme_tcp 00:37:31.215 rmmod nvme_fabrics 00:37:31.215 rmmod nvme_keyring 00:37:31.215 12:59:50 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:37:31.215 12:59:50 -- nvmf/common.sh@123 -- # set -e 00:37:31.215 12:59:50 -- nvmf/common.sh@124 -- # return 0 00:37:31.215 12:59:50 -- nvmf/common.sh@477 -- # '[' -n 73448 ']' 00:37:31.215 12:59:50 -- nvmf/common.sh@478 -- # killprocess 73448 00:37:31.215 12:59:50 -- common/autotest_common.sh@926 -- # '[' -z 73448 ']' 00:37:31.215 12:59:50 -- common/autotest_common.sh@930 -- # kill -0 73448 00:37:31.215 12:59:50 -- common/autotest_common.sh@931 -- # uname 00:37:31.215 12:59:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:37:31.215 12:59:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 73448 00:37:31.215 12:59:50 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:37:31.215 12:59:50 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:37:31.215 12:59:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 73448' 00:37:31.215 killing process with pid 73448 00:37:31.215 12:59:50 -- common/autotest_common.sh@945 -- # kill 73448 00:37:31.215 12:59:50 -- common/autotest_common.sh@950 -- # wait 73448 00:37:31.473 12:59:50 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:37:31.473 12:59:50 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:37:31.473 12:59:50 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:37:31.473 12:59:50 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:37:31.473 12:59:50 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:37:31.473 12:59:50 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:31.473 12:59:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:37:31.473 12:59:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:31.473 12:59:50 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:37:31.473 00:37:31.473 real 3m45.326s 00:37:31.473 user 14m36.375s 00:37:31.473 sys 0m25.887s 00:37:31.473 12:59:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:31.473 
12:59:50 -- common/autotest_common.sh@10 -- # set +x 00:37:31.473 ************************************ 00:37:31.473 END TEST nvmf_connect_disconnect 00:37:31.473 ************************************ 00:37:31.473 12:59:50 -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:37:31.473 12:59:50 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:37:31.473 12:59:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:37:31.473 12:59:50 -- common/autotest_common.sh@10 -- # set +x 00:37:31.731 ************************************ 00:37:31.731 START TEST nvmf_multitarget 00:37:31.731 ************************************ 00:37:31.731 12:59:50 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:37:31.731 * Looking for test storage... 00:37:31.731 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:37:31.731 12:59:50 -- target/multitarget.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:37:31.731 12:59:50 -- nvmf/common.sh@7 -- # uname -s 00:37:31.731 12:59:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:31.731 12:59:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:31.731 12:59:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:31.731 12:59:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:31.731 12:59:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:31.731 12:59:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:31.731 12:59:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:31.731 12:59:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:31.731 12:59:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:31.731 12:59:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:31.731 12:59:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:864ae4a3-cd96-439b-8ef2-6dab4d992115 00:37:31.731 12:59:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=864ae4a3-cd96-439b-8ef2-6dab4d992115 00:37:31.731 12:59:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:31.731 12:59:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:31.731 12:59:50 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:37:31.731 12:59:50 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:37:31.731 12:59:50 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:31.731 12:59:50 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:31.731 12:59:50 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:31.731 12:59:50 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:31.731 12:59:50 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:31.731 12:59:50 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:31.731 12:59:50 -- paths/export.sh@5 -- # export PATH 00:37:31.731 12:59:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:31.731 12:59:50 -- nvmf/common.sh@46 -- # : 0 00:37:31.731 12:59:50 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:37:31.731 12:59:50 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:37:31.731 12:59:50 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:37:31.731 12:59:50 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:31.731 12:59:50 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:31.731 12:59:50 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:37:31.731 12:59:50 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:37:31.731 12:59:50 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:37:31.731 12:59:50 -- target/multitarget.sh@13 -- # rpc_py=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:37:31.731 12:59:50 -- target/multitarget.sh@15 -- # nvmftestinit 00:37:31.731 12:59:50 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:37:31.731 12:59:50 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:31.732 12:59:50 -- nvmf/common.sh@436 -- # prepare_net_devs 00:37:31.732 12:59:50 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:37:31.732 12:59:50 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:37:31.732 12:59:50 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:31.732 12:59:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:37:31.732 12:59:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:31.732 12:59:50 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:37:31.732 12:59:50 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:37:31.732 12:59:50 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:37:31.732 12:59:50 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:37:31.732 12:59:50 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:37:31.732 12:59:50 -- 
nvmf/common.sh@420 -- # nvmf_veth_init 00:37:31.732 12:59:50 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:31.732 12:59:50 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:31.732 12:59:50 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:37:31.732 12:59:50 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:37:31.732 12:59:50 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:37:31.732 12:59:50 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:37:31.732 12:59:50 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:37:31.732 12:59:50 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:31.732 12:59:50 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:37:31.732 12:59:50 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:37:31.732 12:59:50 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:37:31.732 12:59:50 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:37:31.732 12:59:50 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:37:31.732 12:59:51 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:37:31.732 Cannot find device "nvmf_tgt_br" 00:37:31.732 12:59:51 -- nvmf/common.sh@154 -- # true 00:37:31.732 12:59:51 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:37:31.732 Cannot find device "nvmf_tgt_br2" 00:37:31.732 12:59:51 -- nvmf/common.sh@155 -- # true 00:37:31.732 12:59:51 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:37:31.732 12:59:51 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:37:31.732 Cannot find device "nvmf_tgt_br" 00:37:31.732 12:59:51 -- nvmf/common.sh@157 -- # true 00:37:31.732 12:59:51 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:37:31.732 Cannot find device "nvmf_tgt_br2" 00:37:31.732 12:59:51 -- nvmf/common.sh@158 -- # true 00:37:31.732 12:59:51 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:37:31.732 12:59:51 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:37:31.732 12:59:51 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:37:31.732 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:37:31.732 12:59:51 -- nvmf/common.sh@161 -- # true 00:37:31.732 12:59:51 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:37:31.732 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:37:31.732 12:59:51 -- nvmf/common.sh@162 -- # true 00:37:31.732 12:59:51 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:37:31.990 12:59:51 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:37:31.990 12:59:51 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:37:31.990 12:59:51 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:37:31.990 12:59:51 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:37:31.990 12:59:51 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:37:31.990 12:59:51 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:37:31.990 12:59:51 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:37:31.990 12:59:51 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:37:31.990 12:59:51 -- nvmf/common.sh@182 
-- # ip link set nvmf_init_if up 00:37:31.990 12:59:51 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:37:31.990 12:59:51 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:37:31.990 12:59:51 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:37:31.990 12:59:51 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:37:31.990 12:59:51 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:37:31.990 12:59:51 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:37:31.990 12:59:51 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:37:31.990 12:59:51 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:37:31.990 12:59:51 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:37:31.990 12:59:51 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:37:31.990 12:59:51 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:37:31.990 12:59:51 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:37:31.990 12:59:51 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:37:31.990 12:59:51 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:37:31.990 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:31.990 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:37:31.990 00:37:31.990 --- 10.0.0.2 ping statistics --- 00:37:31.990 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:31.991 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:37:31.991 12:59:51 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:37:31.991 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:37:31.991 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:37:31.991 00:37:31.991 --- 10.0.0.3 ping statistics --- 00:37:31.991 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:31.991 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:37:31.991 12:59:51 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:37:31.991 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:31.991 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:37:31.991 00:37:31.991 --- 10.0.0.1 ping statistics --- 00:37:31.991 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:31.991 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:37:31.991 12:59:51 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:31.991 12:59:51 -- nvmf/common.sh@421 -- # return 0 00:37:31.991 12:59:51 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:37:31.991 12:59:51 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:31.991 12:59:51 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:37:31.991 12:59:51 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:37:31.991 12:59:51 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:31.991 12:59:51 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:37:31.991 12:59:51 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:37:31.991 12:59:51 -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:37:31.991 12:59:51 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:37:31.991 12:59:51 -- common/autotest_common.sh@712 -- # xtrace_disable 00:37:31.991 12:59:51 -- common/autotest_common.sh@10 -- # set +x 00:37:31.991 12:59:51 -- nvmf/common.sh@469 -- # nvmfpid=77219 00:37:31.991 12:59:51 -- nvmf/common.sh@470 -- # waitforlisten 77219 00:37:31.991 12:59:51 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:37:31.991 12:59:51 -- common/autotest_common.sh@819 -- # '[' -z 77219 ']' 00:37:31.991 12:59:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:31.991 12:59:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:37:31.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:31.991 12:59:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:31.991 12:59:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:37:31.991 12:59:51 -- common/autotest_common.sh@10 -- # set +x 00:37:31.991 [2024-07-22 12:59:51.406420] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:37:31.991 [2024-07-22 12:59:51.406506] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:32.249 [2024-07-22 12:59:51.544003] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:32.249 [2024-07-22 12:59:51.642382] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:37:32.249 [2024-07-22 12:59:51.642829] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:32.249 [2024-07-22 12:59:51.642892] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:32.249 [2024-07-22 12:59:51.643091] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
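The nvmf_veth_init sequence above rebuilds the test topology from scratch: a target network namespace, veth pairs for the initiator and target sides, a bridge joining them, and iptables rules for port 4420, all verified by the three pings (the second target interface, nvmf_tgt_if2 with 10.0.0.3, is set up the same way). Stripped of the autotest wrappers, roughly the same layout can be reproduced with the commands below; this is a sketch, not the verbatim nvmf/common.sh code, with names taken from the trace:

    # Sketch of the topology the trace sets up (single target interface shown).
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up; ip link set nvmf_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2    # host -> target namespace over the bridge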
00:37:32.249 [2024-07-22 12:59:51.643310] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:37:32.249 [2024-07-22 12:59:51.643423] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:37:32.249 [2024-07-22 12:59:51.643424] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:37:32.249 [2024-07-22 12:59:51.643351] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:37:33.183 12:59:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:37:33.183 12:59:52 -- common/autotest_common.sh@852 -- # return 0 00:37:33.183 12:59:52 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:37:33.183 12:59:52 -- common/autotest_common.sh@718 -- # xtrace_disable 00:37:33.183 12:59:52 -- common/autotest_common.sh@10 -- # set +x 00:37:33.183 12:59:52 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:33.183 12:59:52 -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:37:33.183 12:59:52 -- target/multitarget.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:37:33.183 12:59:52 -- target/multitarget.sh@21 -- # jq length 00:37:33.441 12:59:52 -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:37:33.441 12:59:52 -- target/multitarget.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:37:33.441 "nvmf_tgt_1" 00:37:33.441 12:59:52 -- target/multitarget.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:37:33.705 "nvmf_tgt_2" 00:37:33.705 12:59:52 -- target/multitarget.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:37:33.705 12:59:52 -- target/multitarget.sh@28 -- # jq length 00:37:33.705 12:59:53 -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:37:33.705 12:59:53 -- target/multitarget.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:37:33.973 true 00:37:33.973 12:59:53 -- target/multitarget.sh@33 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:37:33.973 true 00:37:33.973 12:59:53 -- target/multitarget.sh@35 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:37:33.973 12:59:53 -- target/multitarget.sh@35 -- # jq length 00:37:34.232 12:59:53 -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:37:34.232 12:59:53 -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:37:34.232 12:59:53 -- target/multitarget.sh@41 -- # nvmftestfini 00:37:34.232 12:59:53 -- nvmf/common.sh@476 -- # nvmfcleanup 00:37:34.232 12:59:53 -- nvmf/common.sh@116 -- # sync 00:37:34.232 12:59:53 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:37:34.232 12:59:53 -- nvmf/common.sh@119 -- # set +e 00:37:34.232 12:59:53 -- nvmf/common.sh@120 -- # for i in {1..20} 00:37:34.232 12:59:53 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:37:34.232 rmmod nvme_tcp 00:37:34.232 rmmod nvme_fabrics 00:37:34.232 rmmod nvme_keyring 00:37:34.232 12:59:53 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:37:34.232 12:59:53 -- nvmf/common.sh@123 -- # set -e 00:37:34.232 12:59:53 -- nvmf/common.sh@124 -- # return 0 00:37:34.232 12:59:53 -- nvmf/common.sh@477 -- # '[' -n 77219 ']' 00:37:34.232 12:59:53 -- nvmf/common.sh@478 -- # killprocess 77219 00:37:34.232 12:59:53 
-- common/autotest_common.sh@926 -- # '[' -z 77219 ']' 00:37:34.232 12:59:53 -- common/autotest_common.sh@930 -- # kill -0 77219 00:37:34.232 12:59:53 -- common/autotest_common.sh@931 -- # uname 00:37:34.232 12:59:53 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:37:34.232 12:59:53 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 77219 00:37:34.232 killing process with pid 77219 00:37:34.232 12:59:53 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:37:34.232 12:59:53 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:37:34.232 12:59:53 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 77219' 00:37:34.232 12:59:53 -- common/autotest_common.sh@945 -- # kill 77219 00:37:34.232 12:59:53 -- common/autotest_common.sh@950 -- # wait 77219 00:37:34.491 12:59:53 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:37:34.491 12:59:53 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:37:34.491 12:59:53 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:37:34.491 12:59:53 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:37:34.491 12:59:53 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:37:34.491 12:59:53 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:34.491 12:59:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:37:34.491 12:59:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:34.491 12:59:53 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:37:34.491 ************************************ 00:37:34.491 END TEST nvmf_multitarget 00:37:34.491 ************************************ 00:37:34.491 00:37:34.491 real 0m2.931s 00:37:34.491 user 0m9.832s 00:37:34.491 sys 0m0.692s 00:37:34.491 12:59:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:34.491 12:59:53 -- common/autotest_common.sh@10 -- # set +x 00:37:34.491 12:59:53 -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:37:34.491 12:59:53 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:37:34.491 12:59:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:37:34.491 12:59:53 -- common/autotest_common.sh@10 -- # set +x 00:37:34.491 ************************************ 00:37:34.491 START TEST nvmf_rpc 00:37:34.491 ************************************ 00:37:34.491 12:59:53 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:37:34.749 * Looking for test storage... 
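The nvmf_multitarget run that just finished exercises target creation and deletion through multitarget_rpc.py; condensed to the calls visible in the trace, the flow is:

    # Condensed from the trace above; helper path and arguments are exactly as logged.
    RPC=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py
    $RPC nvmf_get_targets | jq length            # 1: only the default target
    $RPC nvmf_create_target -n nvmf_tgt_1 -s 32
    $RPC nvmf_create_target -n nvmf_tgt_2 -s 32
    $RPC nvmf_get_targets | jq length            # 3: default plus the two named targets
    $RPC nvmf_delete_target -n nvmf_tgt_1
    $RPC nvmf_delete_target -n nvmf_tgt_2
    $RPC nvmf_get_targets | jq length            # back to 1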
00:37:34.749 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:37:34.749 12:59:53 -- target/rpc.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:37:34.749 12:59:53 -- nvmf/common.sh@7 -- # uname -s 00:37:34.749 12:59:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:34.749 12:59:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:34.749 12:59:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:34.749 12:59:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:34.749 12:59:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:34.749 12:59:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:34.749 12:59:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:34.749 12:59:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:34.749 12:59:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:34.749 12:59:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:34.749 12:59:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:864ae4a3-cd96-439b-8ef2-6dab4d992115 00:37:34.749 12:59:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=864ae4a3-cd96-439b-8ef2-6dab4d992115 00:37:34.749 12:59:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:34.750 12:59:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:34.750 12:59:53 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:37:34.750 12:59:53 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:37:34.750 12:59:53 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:34.750 12:59:53 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:34.750 12:59:53 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:34.750 12:59:53 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:34.750 12:59:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:34.750 12:59:53 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:34.750 12:59:53 -- paths/export.sh@5 
-- # export PATH 00:37:34.750 12:59:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:34.750 12:59:53 -- nvmf/common.sh@46 -- # : 0 00:37:34.750 12:59:53 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:37:34.750 12:59:53 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:37:34.750 12:59:53 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:37:34.750 12:59:53 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:34.750 12:59:53 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:34.750 12:59:53 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:37:34.750 12:59:53 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:37:34.750 12:59:53 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:37:34.750 12:59:53 -- target/rpc.sh@11 -- # loops=5 00:37:34.750 12:59:53 -- target/rpc.sh@23 -- # nvmftestinit 00:37:34.750 12:59:53 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:37:34.750 12:59:53 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:34.750 12:59:53 -- nvmf/common.sh@436 -- # prepare_net_devs 00:37:34.750 12:59:53 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:37:34.750 12:59:53 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:37:34.750 12:59:53 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:34.750 12:59:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:37:34.750 12:59:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:34.750 12:59:53 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:37:34.750 12:59:53 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:37:34.750 12:59:53 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:37:34.750 12:59:53 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:37:34.750 12:59:53 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:37:34.750 12:59:53 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:37:34.750 12:59:53 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:34.750 12:59:53 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:34.750 12:59:53 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:37:34.750 12:59:53 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:37:34.750 12:59:53 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:37:34.750 12:59:53 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:37:34.750 12:59:53 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:37:34.750 12:59:53 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:34.750 12:59:53 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:37:34.750 12:59:53 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:37:34.750 12:59:53 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:37:34.750 12:59:53 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:37:34.750 12:59:53 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:37:34.750 12:59:54 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:37:34.750 Cannot find device 
"nvmf_tgt_br" 00:37:34.750 12:59:54 -- nvmf/common.sh@154 -- # true 00:37:34.750 12:59:54 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:37:34.750 Cannot find device "nvmf_tgt_br2" 00:37:34.750 12:59:54 -- nvmf/common.sh@155 -- # true 00:37:34.750 12:59:54 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:37:34.750 12:59:54 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:37:34.750 Cannot find device "nvmf_tgt_br" 00:37:34.750 12:59:54 -- nvmf/common.sh@157 -- # true 00:37:34.750 12:59:54 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:37:34.750 Cannot find device "nvmf_tgt_br2" 00:37:34.750 12:59:54 -- nvmf/common.sh@158 -- # true 00:37:34.750 12:59:54 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:37:34.750 12:59:54 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:37:34.750 12:59:54 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:37:34.750 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:37:34.750 12:59:54 -- nvmf/common.sh@161 -- # true 00:37:34.750 12:59:54 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:37:34.750 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:37:34.750 12:59:54 -- nvmf/common.sh@162 -- # true 00:37:34.750 12:59:54 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:37:34.750 12:59:54 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:37:34.750 12:59:54 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:37:34.750 12:59:54 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:37:34.750 12:59:54 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:37:35.009 12:59:54 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:37:35.009 12:59:54 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:37:35.009 12:59:54 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:37:35.009 12:59:54 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:37:35.009 12:59:54 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:37:35.009 12:59:54 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:37:35.009 12:59:54 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:37:35.009 12:59:54 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:37:35.009 12:59:54 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:37:35.009 12:59:54 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:37:35.009 12:59:54 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:37:35.009 12:59:54 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:37:35.009 12:59:54 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:37:35.009 12:59:54 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:37:35.009 12:59:54 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:37:35.009 12:59:54 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:37:35.009 12:59:54 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:37:35.009 12:59:54 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:37:35.009 12:59:54 -- 
nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:37:35.009 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:35.009 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.112 ms 00:37:35.009 00:37:35.009 --- 10.0.0.2 ping statistics --- 00:37:35.009 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:35.009 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:37:35.009 12:59:54 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:37:35.009 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:37:35.009 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:37:35.009 00:37:35.009 --- 10.0.0.3 ping statistics --- 00:37:35.009 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:35.009 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:37:35.009 12:59:54 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:37:35.009 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:35.009 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:37:35.009 00:37:35.009 --- 10.0.0.1 ping statistics --- 00:37:35.009 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:35.009 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:37:35.009 12:59:54 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:35.009 12:59:54 -- nvmf/common.sh@421 -- # return 0 00:37:35.009 12:59:54 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:37:35.009 12:59:54 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:35.009 12:59:54 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:37:35.009 12:59:54 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:37:35.009 12:59:54 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:35.009 12:59:54 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:37:35.009 12:59:54 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:37:35.009 12:59:54 -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:37:35.009 12:59:54 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:37:35.009 12:59:54 -- common/autotest_common.sh@712 -- # xtrace_disable 00:37:35.009 12:59:54 -- common/autotest_common.sh@10 -- # set +x 00:37:35.009 12:59:54 -- nvmf/common.sh@469 -- # nvmfpid=77446 00:37:35.009 12:59:54 -- nvmf/common.sh@470 -- # waitforlisten 77446 00:37:35.009 12:59:54 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:37:35.009 12:59:54 -- common/autotest_common.sh@819 -- # '[' -z 77446 ']' 00:37:35.009 12:59:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:35.009 12:59:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:37:35.009 12:59:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:35.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:35.009 12:59:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:37:35.009 12:59:54 -- common/autotest_common.sh@10 -- # set +x 00:37:35.009 [2024-07-22 12:59:54.420940] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
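waitforlisten 77446 above blocks until the freshly launched nvmf_tgt answers on /var/tmp/spdk.sock (up to max_retries=100). A minimal illustrative stand-in for that wait, assuming the standard scripts/rpc.py helper and not the actual autotest_common.sh implementation:

    # Illustrative readiness poll; the real waitforlisten helper differs in detail.
    RPC_SOCK=/var/tmp/spdk.sock
    for retry in $(seq 1 100); do
        if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$RPC_SOCK" spdk_get_version >/dev/null 2>&1; then
            break    # target is up and serving RPCs
        fi
        sleep 0.5
    done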
00:37:35.009 [2024-07-22 12:59:54.421039] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:35.268 [2024-07-22 12:59:54.561378] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:35.268 [2024-07-22 12:59:54.665379] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:37:35.268 [2024-07-22 12:59:54.665549] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:35.268 [2024-07-22 12:59:54.665564] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:35.268 [2024-07-22 12:59:54.665575] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:35.268 [2024-07-22 12:59:54.666036] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:37:35.268 [2024-07-22 12:59:54.666196] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:37:35.268 [2024-07-22 12:59:54.666288] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:37:35.268 [2024-07-22 12:59:54.666297] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:37:36.239 12:59:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:37:36.239 12:59:55 -- common/autotest_common.sh@852 -- # return 0 00:37:36.239 12:59:55 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:37:36.239 12:59:55 -- common/autotest_common.sh@718 -- # xtrace_disable 00:37:36.239 12:59:55 -- common/autotest_common.sh@10 -- # set +x 00:37:36.239 12:59:55 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:36.239 12:59:55 -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:37:36.239 12:59:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:36.239 12:59:55 -- common/autotest_common.sh@10 -- # set +x 00:37:36.239 12:59:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:36.239 12:59:55 -- target/rpc.sh@26 -- # stats='{ 00:37:36.239 "poll_groups": [ 00:37:36.239 { 00:37:36.239 "admin_qpairs": 0, 00:37:36.239 "completed_nvme_io": 0, 00:37:36.239 "current_admin_qpairs": 0, 00:37:36.239 "current_io_qpairs": 0, 00:37:36.239 "io_qpairs": 0, 00:37:36.239 "name": "nvmf_tgt_poll_group_0", 00:37:36.239 "pending_bdev_io": 0, 00:37:36.239 "transports": [] 00:37:36.239 }, 00:37:36.239 { 00:37:36.239 "admin_qpairs": 0, 00:37:36.239 "completed_nvme_io": 0, 00:37:36.239 "current_admin_qpairs": 0, 00:37:36.239 "current_io_qpairs": 0, 00:37:36.239 "io_qpairs": 0, 00:37:36.239 "name": "nvmf_tgt_poll_group_1", 00:37:36.239 "pending_bdev_io": 0, 00:37:36.239 "transports": [] 00:37:36.239 }, 00:37:36.239 { 00:37:36.239 "admin_qpairs": 0, 00:37:36.239 "completed_nvme_io": 0, 00:37:36.239 "current_admin_qpairs": 0, 00:37:36.239 "current_io_qpairs": 0, 00:37:36.239 "io_qpairs": 0, 00:37:36.239 "name": "nvmf_tgt_poll_group_2", 00:37:36.239 "pending_bdev_io": 0, 00:37:36.239 "transports": [] 00:37:36.239 }, 00:37:36.239 { 00:37:36.239 "admin_qpairs": 0, 00:37:36.239 "completed_nvme_io": 0, 00:37:36.239 "current_admin_qpairs": 0, 00:37:36.239 "current_io_qpairs": 0, 00:37:36.239 "io_qpairs": 0, 00:37:36.239 "name": "nvmf_tgt_poll_group_3", 00:37:36.239 "pending_bdev_io": 0, 00:37:36.239 "transports": [] 00:37:36.239 } 00:37:36.239 ], 00:37:36.239 "tick_rate": 2200000000 00:37:36.239 }' 00:37:36.239 
12:59:55 -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:37:36.239 12:59:55 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:37:36.239 12:59:55 -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:37:36.239 12:59:55 -- target/rpc.sh@15 -- # wc -l 00:37:36.239 12:59:55 -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:37:36.239 12:59:55 -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:37:36.239 12:59:55 -- target/rpc.sh@29 -- # [[ null == null ]] 00:37:36.239 12:59:55 -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:36.239 12:59:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:36.239 12:59:55 -- common/autotest_common.sh@10 -- # set +x 00:37:36.239 [2024-07-22 12:59:55.620914] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:36.239 12:59:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:36.239 12:59:55 -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:37:36.239 12:59:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:36.239 12:59:55 -- common/autotest_common.sh@10 -- # set +x 00:37:36.498 12:59:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:36.498 12:59:55 -- target/rpc.sh@33 -- # stats='{ 00:37:36.498 "poll_groups": [ 00:37:36.498 { 00:37:36.498 "admin_qpairs": 0, 00:37:36.498 "completed_nvme_io": 0, 00:37:36.498 "current_admin_qpairs": 0, 00:37:36.498 "current_io_qpairs": 0, 00:37:36.498 "io_qpairs": 0, 00:37:36.498 "name": "nvmf_tgt_poll_group_0", 00:37:36.498 "pending_bdev_io": 0, 00:37:36.498 "transports": [ 00:37:36.498 { 00:37:36.498 "trtype": "TCP" 00:37:36.498 } 00:37:36.498 ] 00:37:36.498 }, 00:37:36.498 { 00:37:36.498 "admin_qpairs": 0, 00:37:36.498 "completed_nvme_io": 0, 00:37:36.498 "current_admin_qpairs": 0, 00:37:36.498 "current_io_qpairs": 0, 00:37:36.498 "io_qpairs": 0, 00:37:36.498 "name": "nvmf_tgt_poll_group_1", 00:37:36.498 "pending_bdev_io": 0, 00:37:36.498 "transports": [ 00:37:36.498 { 00:37:36.498 "trtype": "TCP" 00:37:36.498 } 00:37:36.498 ] 00:37:36.498 }, 00:37:36.498 { 00:37:36.498 "admin_qpairs": 0, 00:37:36.498 "completed_nvme_io": 0, 00:37:36.498 "current_admin_qpairs": 0, 00:37:36.498 "current_io_qpairs": 0, 00:37:36.498 "io_qpairs": 0, 00:37:36.498 "name": "nvmf_tgt_poll_group_2", 00:37:36.498 "pending_bdev_io": 0, 00:37:36.498 "transports": [ 00:37:36.498 { 00:37:36.498 "trtype": "TCP" 00:37:36.498 } 00:37:36.498 ] 00:37:36.498 }, 00:37:36.498 { 00:37:36.498 "admin_qpairs": 0, 00:37:36.498 "completed_nvme_io": 0, 00:37:36.498 "current_admin_qpairs": 0, 00:37:36.498 "current_io_qpairs": 0, 00:37:36.498 "io_qpairs": 0, 00:37:36.498 "name": "nvmf_tgt_poll_group_3", 00:37:36.498 "pending_bdev_io": 0, 00:37:36.498 "transports": [ 00:37:36.498 { 00:37:36.498 "trtype": "TCP" 00:37:36.498 } 00:37:36.498 ] 00:37:36.498 } 00:37:36.498 ], 00:37:36.498 "tick_rate": 2200000000 00:37:36.498 }' 00:37:36.498 12:59:55 -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:37:36.498 12:59:55 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:37:36.498 12:59:55 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:37:36.498 12:59:55 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:37:36.498 12:59:55 -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:37:36.498 12:59:55 -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:37:36.498 12:59:55 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:37:36.498 12:59:55 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:37:36.498 12:59:55 -- 
target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:37:36.498 12:59:55 -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:37:36.498 12:59:55 -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:37:36.498 12:59:55 -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:37:36.498 12:59:55 -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:37:36.498 12:59:55 -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:37:36.498 12:59:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:36.498 12:59:55 -- common/autotest_common.sh@10 -- # set +x 00:37:36.498 Malloc1 00:37:36.498 12:59:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:36.498 12:59:55 -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:37:36.498 12:59:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:36.498 12:59:55 -- common/autotest_common.sh@10 -- # set +x 00:37:36.498 12:59:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:36.498 12:59:55 -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:37:36.498 12:59:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:36.498 12:59:55 -- common/autotest_common.sh@10 -- # set +x 00:37:36.498 12:59:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:36.498 12:59:55 -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:37:36.498 12:59:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:36.498 12:59:55 -- common/autotest_common.sh@10 -- # set +x 00:37:36.498 12:59:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:36.498 12:59:55 -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:36.498 12:59:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:36.498 12:59:55 -- common/autotest_common.sh@10 -- # set +x 00:37:36.498 [2024-07-22 12:59:55.845921] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:36.498 12:59:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:36.498 12:59:55 -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:864ae4a3-cd96-439b-8ef2-6dab4d992115 --hostid=864ae4a3-cd96-439b-8ef2-6dab4d992115 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:864ae4a3-cd96-439b-8ef2-6dab4d992115 -a 10.0.0.2 -s 4420 00:37:36.498 12:59:55 -- common/autotest_common.sh@640 -- # local es=0 00:37:36.498 12:59:55 -- common/autotest_common.sh@642 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:864ae4a3-cd96-439b-8ef2-6dab4d992115 --hostid=864ae4a3-cd96-439b-8ef2-6dab4d992115 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:864ae4a3-cd96-439b-8ef2-6dab4d992115 -a 10.0.0.2 -s 4420 00:37:36.498 12:59:55 -- common/autotest_common.sh@628 -- # local arg=nvme 00:37:36.498 12:59:55 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:37:36.498 12:59:55 -- common/autotest_common.sh@632 -- # type -t nvme 00:37:36.498 12:59:55 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:37:36.498 12:59:55 -- common/autotest_common.sh@634 -- # type -P nvme 00:37:36.498 12:59:55 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:37:36.498 12:59:55 -- common/autotest_common.sh@634 -- # arg=/usr/sbin/nvme 00:37:36.498 12:59:55 -- common/autotest_common.sh@634 -- # [[ -x /usr/sbin/nvme ]] 00:37:36.499 12:59:55 -- 
common/autotest_common.sh@643 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:864ae4a3-cd96-439b-8ef2-6dab4d992115 --hostid=864ae4a3-cd96-439b-8ef2-6dab4d992115 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:864ae4a3-cd96-439b-8ef2-6dab4d992115 -a 10.0.0.2 -s 4420 00:37:36.499 [2024-07-22 12:59:55.876264] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:864ae4a3-cd96-439b-8ef2-6dab4d992115' 00:37:36.499 Failed to write to /dev/nvme-fabrics: Input/output error 00:37:36.499 could not add new controller: failed to write to nvme-fabrics device 00:37:36.499 12:59:55 -- common/autotest_common.sh@643 -- # es=1 00:37:36.499 12:59:55 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:37:36.499 12:59:55 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:37:36.499 12:59:55 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:37:36.499 12:59:55 -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:864ae4a3-cd96-439b-8ef2-6dab4d992115 00:37:36.499 12:59:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:36.499 12:59:55 -- common/autotest_common.sh@10 -- # set +x 00:37:36.499 12:59:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:36.499 12:59:55 -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:864ae4a3-cd96-439b-8ef2-6dab4d992115 --hostid=864ae4a3-cd96-439b-8ef2-6dab4d992115 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:37:36.757 12:59:56 -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:37:36.757 12:59:56 -- common/autotest_common.sh@1177 -- # local i=0 00:37:36.757 12:59:56 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:37:36.757 12:59:56 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:37:36.757 12:59:56 -- common/autotest_common.sh@1184 -- # sleep 2 00:37:38.661 12:59:58 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:37:38.661 12:59:58 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:37:38.661 12:59:58 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:37:38.920 12:59:58 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:37:38.920 12:59:58 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:37:38.920 12:59:58 -- common/autotest_common.sh@1187 -- # return 0 00:37:38.920 12:59:58 -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:37:38.920 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:37:38.920 12:59:58 -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:37:38.920 12:59:58 -- common/autotest_common.sh@1198 -- # local i=0 00:37:38.920 12:59:58 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:37:38.920 12:59:58 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:37:38.920 12:59:58 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:37:38.920 12:59:58 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:37:38.920 12:59:58 -- common/autotest_common.sh@1210 -- # return 0 00:37:38.920 12:59:58 -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:864ae4a3-cd96-439b-8ef2-6dab4d992115 00:37:38.920 12:59:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:38.920 12:59:58 -- common/autotest_common.sh@10 
-- # set +x 00:37:38.920 12:59:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:38.920 12:59:58 -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:864ae4a3-cd96-439b-8ef2-6dab4d992115 --hostid=864ae4a3-cd96-439b-8ef2-6dab4d992115 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:37:38.920 12:59:58 -- common/autotest_common.sh@640 -- # local es=0 00:37:38.920 12:59:58 -- common/autotest_common.sh@642 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:864ae4a3-cd96-439b-8ef2-6dab4d992115 --hostid=864ae4a3-cd96-439b-8ef2-6dab4d992115 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:37:38.920 12:59:58 -- common/autotest_common.sh@628 -- # local arg=nvme 00:37:38.920 12:59:58 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:37:38.920 12:59:58 -- common/autotest_common.sh@632 -- # type -t nvme 00:37:38.920 12:59:58 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:37:38.920 12:59:58 -- common/autotest_common.sh@634 -- # type -P nvme 00:37:38.920 12:59:58 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:37:38.920 12:59:58 -- common/autotest_common.sh@634 -- # arg=/usr/sbin/nvme 00:37:38.920 12:59:58 -- common/autotest_common.sh@634 -- # [[ -x /usr/sbin/nvme ]] 00:37:38.920 12:59:58 -- common/autotest_common.sh@643 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:864ae4a3-cd96-439b-8ef2-6dab4d992115 --hostid=864ae4a3-cd96-439b-8ef2-6dab4d992115 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:37:38.920 [2024-07-22 12:59:58.187257] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:864ae4a3-cd96-439b-8ef2-6dab4d992115' 00:37:38.920 Failed to write to /dev/nvme-fabrics: Input/output error 00:37:38.920 could not add new controller: failed to write to nvme-fabrics device 00:37:38.920 12:59:58 -- common/autotest_common.sh@643 -- # es=1 00:37:38.920 12:59:58 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:37:38.920 12:59:58 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:37:38.920 12:59:58 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:37:38.920 12:59:58 -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:37:38.920 12:59:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:38.920 12:59:58 -- common/autotest_common.sh@10 -- # set +x 00:37:38.920 12:59:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:38.920 12:59:58 -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:864ae4a3-cd96-439b-8ef2-6dab4d992115 --hostid=864ae4a3-cd96-439b-8ef2-6dab4d992115 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:37:39.179 12:59:58 -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:37:39.179 12:59:58 -- common/autotest_common.sh@1177 -- # local i=0 00:37:39.179 12:59:58 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:37:39.179 12:59:58 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:37:39.179 12:59:58 -- common/autotest_common.sh@1184 -- # sleep 2 00:37:41.107 13:00:00 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:37:41.107 13:00:00 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:37:41.107 13:00:00 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:37:41.107 13:00:00 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:37:41.107 13:00:00 
-- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:37:41.107 13:00:00 -- common/autotest_common.sh@1187 -- # return 0 00:37:41.107 13:00:00 -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:37:41.107 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:37:41.107 13:00:00 -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:37:41.107 13:00:00 -- common/autotest_common.sh@1198 -- # local i=0 00:37:41.107 13:00:00 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:37:41.107 13:00:00 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:37:41.107 13:00:00 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:37:41.107 13:00:00 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:37:41.107 13:00:00 -- common/autotest_common.sh@1210 -- # return 0 00:37:41.107 13:00:00 -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:41.107 13:00:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:41.107 13:00:00 -- common/autotest_common.sh@10 -- # set +x 00:37:41.107 13:00:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:41.107 13:00:00 -- target/rpc.sh@81 -- # seq 1 5 00:37:41.107 13:00:00 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:37:41.107 13:00:00 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:37:41.107 13:00:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:41.107 13:00:00 -- common/autotest_common.sh@10 -- # set +x 00:37:41.107 13:00:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:41.107 13:00:00 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:41.107 13:00:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:41.107 13:00:00 -- common/autotest_common.sh@10 -- # set +x 00:37:41.107 [2024-07-22 13:00:00.485308] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:41.107 13:00:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:41.107 13:00:00 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:37:41.107 13:00:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:41.107 13:00:00 -- common/autotest_common.sh@10 -- # set +x 00:37:41.107 13:00:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:41.107 13:00:00 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:37:41.107 13:00:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:41.107 13:00:00 -- common/autotest_common.sh@10 -- # set +x 00:37:41.107 13:00:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:41.107 13:00:00 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:864ae4a3-cd96-439b-8ef2-6dab4d992115 --hostid=864ae4a3-cd96-439b-8ef2-6dab4d992115 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:37:41.366 13:00:00 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:37:41.366 13:00:00 -- common/autotest_common.sh@1177 -- # local i=0 00:37:41.366 13:00:00 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:37:41.366 13:00:00 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:37:41.366 13:00:00 -- common/autotest_common.sh@1184 -- # sleep 2 00:37:43.266 13:00:02 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 
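The sleep/lsblk/grep polling traced here is the autotest waitforserial helper waiting for the newly connected namespace to show up on the initiator. A rough standalone re-creation of that loop, using only the commands visible above (the real helper lives in autotest_common.sh and may differ in detail):

  # Approximation of the waitforserial helper seen in the trace; assumptions:
  # the serial string is passed as $1 and one matching block device is expected.
  waitforserial() {
      local serial=$1 nvme_device_counter=${2:-1} nvme_devices=0 i=0
      while (( i++ <= 15 )); do
          sleep 2
          # count block devices whose SERIAL column matches
          nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
          (( nvme_devices == nvme_device_counter )) && return 0
      done
      return 1
  }

  # usage mirroring the trace:
  #   waitforserial SPDKISFASTANDAWESOME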
00:37:43.266 13:00:02 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:37:43.266 13:00:02 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:37:43.524 13:00:02 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:37:43.524 13:00:02 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:37:43.524 13:00:02 -- common/autotest_common.sh@1187 -- # return 0 00:37:43.524 13:00:02 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:37:43.524 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:37:43.524 13:00:02 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:37:43.524 13:00:02 -- common/autotest_common.sh@1198 -- # local i=0 00:37:43.524 13:00:02 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:37:43.524 13:00:02 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:37:43.524 13:00:02 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:37:43.524 13:00:02 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:37:43.524 13:00:02 -- common/autotest_common.sh@1210 -- # return 0 00:37:43.524 13:00:02 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:43.524 13:00:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:43.524 13:00:02 -- common/autotest_common.sh@10 -- # set +x 00:37:43.524 13:00:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:43.524 13:00:02 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:43.524 13:00:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:43.524 13:00:02 -- common/autotest_common.sh@10 -- # set +x 00:37:43.524 13:00:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:43.524 13:00:02 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:37:43.524 13:00:02 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:37:43.524 13:00:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:43.524 13:00:02 -- common/autotest_common.sh@10 -- # set +x 00:37:43.524 13:00:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:43.524 13:00:02 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:43.524 13:00:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:43.524 13:00:02 -- common/autotest_common.sh@10 -- # set +x 00:37:43.524 [2024-07-22 13:00:02.777735] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:43.524 13:00:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:43.524 13:00:02 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:37:43.524 13:00:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:43.524 13:00:02 -- common/autotest_common.sh@10 -- # set +x 00:37:43.524 13:00:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:43.524 13:00:02 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:37:43.524 13:00:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:43.524 13:00:02 -- common/autotest_common.sh@10 -- # set +x 00:37:43.524 13:00:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:43.524 13:00:02 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:864ae4a3-cd96-439b-8ef2-6dab4d992115 
--hostid=864ae4a3-cd96-439b-8ef2-6dab4d992115 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:37:43.792 13:00:02 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:37:43.792 13:00:02 -- common/autotest_common.sh@1177 -- # local i=0 00:37:43.792 13:00:02 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:37:43.792 13:00:02 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:37:43.792 13:00:02 -- common/autotest_common.sh@1184 -- # sleep 2 00:37:45.708 13:00:04 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:37:45.709 13:00:04 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:37:45.709 13:00:04 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:37:45.709 13:00:04 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:37:45.709 13:00:04 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:37:45.709 13:00:04 -- common/autotest_common.sh@1187 -- # return 0 00:37:45.709 13:00:04 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:37:45.709 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:37:45.709 13:00:05 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:37:45.709 13:00:05 -- common/autotest_common.sh@1198 -- # local i=0 00:37:45.709 13:00:05 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:37:45.709 13:00:05 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:37:45.967 13:00:05 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:37:45.967 13:00:05 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:37:45.967 13:00:05 -- common/autotest_common.sh@1210 -- # return 0 00:37:45.967 13:00:05 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:45.967 13:00:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:45.967 13:00:05 -- common/autotest_common.sh@10 -- # set +x 00:37:45.967 13:00:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:45.967 13:00:05 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:45.967 13:00:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:45.967 13:00:05 -- common/autotest_common.sh@10 -- # set +x 00:37:45.967 13:00:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:45.967 13:00:05 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:37:45.967 13:00:05 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:37:45.967 13:00:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:45.967 13:00:05 -- common/autotest_common.sh@10 -- # set +x 00:37:45.967 13:00:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:45.967 13:00:05 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:45.967 13:00:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:45.967 13:00:05 -- common/autotest_common.sh@10 -- # set +x 00:37:45.967 [2024-07-22 13:00:05.173574] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:45.967 13:00:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:45.967 13:00:05 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:37:45.967 13:00:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:45.967 13:00:05 -- common/autotest_common.sh@10 -- # set 
+x 00:37:45.967 13:00:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:45.967 13:00:05 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:37:45.967 13:00:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:45.967 13:00:05 -- common/autotest_common.sh@10 -- # set +x 00:37:45.967 13:00:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:45.967 13:00:05 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:864ae4a3-cd96-439b-8ef2-6dab4d992115 --hostid=864ae4a3-cd96-439b-8ef2-6dab4d992115 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:37:45.967 13:00:05 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:37:45.967 13:00:05 -- common/autotest_common.sh@1177 -- # local i=0 00:37:45.967 13:00:05 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:37:45.967 13:00:05 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:37:45.967 13:00:05 -- common/autotest_common.sh@1184 -- # sleep 2 00:37:48.495 13:00:07 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:37:48.495 13:00:07 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:37:48.495 13:00:07 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:37:48.495 13:00:07 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:37:48.495 13:00:07 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:37:48.495 13:00:07 -- common/autotest_common.sh@1187 -- # return 0 00:37:48.495 13:00:07 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:37:48.495 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:37:48.495 13:00:07 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:37:48.495 13:00:07 -- common/autotest_common.sh@1198 -- # local i=0 00:37:48.495 13:00:07 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:37:48.495 13:00:07 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:37:48.495 13:00:07 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:37:48.495 13:00:07 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:37:48.495 13:00:07 -- common/autotest_common.sh@1210 -- # return 0 00:37:48.495 13:00:07 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:48.495 13:00:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:48.495 13:00:07 -- common/autotest_common.sh@10 -- # set +x 00:37:48.495 13:00:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:48.495 13:00:07 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:48.495 13:00:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:48.495 13:00:07 -- common/autotest_common.sh@10 -- # set +x 00:37:48.495 13:00:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:48.495 13:00:07 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:37:48.495 13:00:07 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:37:48.495 13:00:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:48.495 13:00:07 -- common/autotest_common.sh@10 -- # set +x 00:37:48.495 13:00:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:48.495 13:00:07 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:48.495 13:00:07 -- common/autotest_common.sh@551 -- # 
xtrace_disable 00:37:48.495 13:00:07 -- common/autotest_common.sh@10 -- # set +x 00:37:48.495 [2024-07-22 13:00:07.472816] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:48.495 13:00:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:48.495 13:00:07 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:37:48.495 13:00:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:48.495 13:00:07 -- common/autotest_common.sh@10 -- # set +x 00:37:48.495 13:00:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:48.495 13:00:07 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:37:48.495 13:00:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:48.495 13:00:07 -- common/autotest_common.sh@10 -- # set +x 00:37:48.495 13:00:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:48.495 13:00:07 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:864ae4a3-cd96-439b-8ef2-6dab4d992115 --hostid=864ae4a3-cd96-439b-8ef2-6dab4d992115 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:37:48.495 13:00:07 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:37:48.495 13:00:07 -- common/autotest_common.sh@1177 -- # local i=0 00:37:48.495 13:00:07 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:37:48.495 13:00:07 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:37:48.495 13:00:07 -- common/autotest_common.sh@1184 -- # sleep 2 00:37:50.410 13:00:09 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:37:50.410 13:00:09 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:37:50.410 13:00:09 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:37:50.410 13:00:09 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:37:50.410 13:00:09 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:37:50.410 13:00:09 -- common/autotest_common.sh@1187 -- # return 0 00:37:50.410 13:00:09 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:37:50.410 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:37:50.410 13:00:09 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:37:50.410 13:00:09 -- common/autotest_common.sh@1198 -- # local i=0 00:37:50.410 13:00:09 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:37:50.410 13:00:09 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:37:50.410 13:00:09 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:37:50.410 13:00:09 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:37:50.410 13:00:09 -- common/autotest_common.sh@1210 -- # return 0 00:37:50.410 13:00:09 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:50.410 13:00:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:50.410 13:00:09 -- common/autotest_common.sh@10 -- # set +x 00:37:50.410 13:00:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:50.410 13:00:09 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:50.410 13:00:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:50.410 13:00:09 -- common/autotest_common.sh@10 -- # set +x 00:37:50.410 13:00:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:50.410 13:00:09 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 
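Each pass of this seq 1 5 loop provisions a subsystem, attaches the Malloc1 bdev as namespace 5, connects from the initiator, and tears everything back down. A condensed sketch of one iteration, built only from the rpc.py and nvme-cli invocations visible in the trace; rpc_cmd is assumed to forward to scripts/rpc.py against the target running in the netns, and NVME_HOSTNQN/NVME_HOSTID come from nvmf/common.sh:

  # One iteration of the subsystem churn loop, reconstructed from the trace.
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5   # attach bdev as nsid 5
  rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1        # open access for the connect below
  nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" -t tcp \
      -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  waitforserial SPDKISFASTANDAWESOME
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
  rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1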
00:37:50.410 13:00:09 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:37:50.410 13:00:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:50.410 13:00:09 -- common/autotest_common.sh@10 -- # set +x 00:37:50.410 13:00:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:50.410 13:00:09 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:50.410 13:00:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:50.410 13:00:09 -- common/autotest_common.sh@10 -- # set +x 00:37:50.410 [2024-07-22 13:00:09.788192] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:50.410 13:00:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:50.410 13:00:09 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:37:50.410 13:00:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:50.410 13:00:09 -- common/autotest_common.sh@10 -- # set +x 00:37:50.410 13:00:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:50.410 13:00:09 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:37:50.410 13:00:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:50.410 13:00:09 -- common/autotest_common.sh@10 -- # set +x 00:37:50.410 13:00:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:50.410 13:00:09 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:864ae4a3-cd96-439b-8ef2-6dab4d992115 --hostid=864ae4a3-cd96-439b-8ef2-6dab4d992115 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:37:50.668 13:00:09 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:37:50.668 13:00:09 -- common/autotest_common.sh@1177 -- # local i=0 00:37:50.668 13:00:09 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:37:50.668 13:00:09 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:37:50.668 13:00:09 -- common/autotest_common.sh@1184 -- # sleep 2 00:37:52.569 13:00:11 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:37:52.569 13:00:11 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:37:52.569 13:00:11 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:37:52.828 13:00:11 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:37:52.828 13:00:11 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:37:52.828 13:00:11 -- common/autotest_common.sh@1187 -- # return 0 00:37:52.828 13:00:11 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:37:52.828 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:37:52.828 13:00:12 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:37:52.828 13:00:12 -- common/autotest_common.sh@1198 -- # local i=0 00:37:52.828 13:00:12 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:37:52.828 13:00:12 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:37:52.828 13:00:12 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:37:52.828 13:00:12 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:37:52.828 13:00:12 -- common/autotest_common.sh@1210 -- # return 0 00:37:52.828 13:00:12 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:52.828 13:00:12 -- common/autotest_common.sh@551 -- # 
xtrace_disable 00:37:52.828 13:00:12 -- common/autotest_common.sh@10 -- # set +x 00:37:52.828 13:00:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:52.828 13:00:12 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:52.828 13:00:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:52.828 13:00:12 -- common/autotest_common.sh@10 -- # set +x 00:37:52.828 13:00:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:52.828 13:00:12 -- target/rpc.sh@99 -- # seq 1 5 00:37:52.828 13:00:12 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:37:52.828 13:00:12 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:37:52.828 13:00:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:52.828 13:00:12 -- common/autotest_common.sh@10 -- # set +x 00:37:52.828 13:00:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:52.828 13:00:12 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:52.828 13:00:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:52.828 13:00:12 -- common/autotest_common.sh@10 -- # set +x 00:37:52.829 [2024-07-22 13:00:12.091344] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:52.829 13:00:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:52.829 13:00:12 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:37:52.829 13:00:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:52.829 13:00:12 -- common/autotest_common.sh@10 -- # set +x 00:37:52.829 13:00:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:52.829 13:00:12 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:37:52.829 13:00:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:52.829 13:00:12 -- common/autotest_common.sh@10 -- # set +x 00:37:52.829 13:00:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:52.829 13:00:12 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:52.829 13:00:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:52.829 13:00:12 -- common/autotest_common.sh@10 -- # set +x 00:37:52.829 13:00:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:52.829 13:00:12 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:52.829 13:00:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:52.829 13:00:12 -- common/autotest_common.sh@10 -- # set +x 00:37:52.829 13:00:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:52.829 13:00:12 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:37:52.829 13:00:12 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:37:52.829 13:00:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:52.829 13:00:12 -- common/autotest_common.sh@10 -- # set +x 00:37:52.829 13:00:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:52.829 13:00:12 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:52.829 13:00:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:52.829 13:00:12 -- common/autotest_common.sh@10 -- # set +x 00:37:52.829 [2024-07-22 13:00:12.151407] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 
*** 00:37:52.829 13:00:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:52.829 13:00:12 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:37:52.829 13:00:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:52.829 13:00:12 -- common/autotest_common.sh@10 -- # set +x 00:37:52.829 13:00:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:52.829 13:00:12 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:37:52.829 13:00:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:52.829 13:00:12 -- common/autotest_common.sh@10 -- # set +x 00:37:52.829 13:00:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:52.829 13:00:12 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:52.829 13:00:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:52.829 13:00:12 -- common/autotest_common.sh@10 -- # set +x 00:37:52.829 13:00:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:52.829 13:00:12 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:52.829 13:00:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:52.829 13:00:12 -- common/autotest_common.sh@10 -- # set +x 00:37:52.829 13:00:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:52.829 13:00:12 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:37:52.829 13:00:12 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:37:52.829 13:00:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:52.829 13:00:12 -- common/autotest_common.sh@10 -- # set +x 00:37:52.829 13:00:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:52.829 13:00:12 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:52.829 13:00:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:52.829 13:00:12 -- common/autotest_common.sh@10 -- # set +x 00:37:52.829 [2024-07-22 13:00:12.207479] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:52.829 13:00:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:52.829 13:00:12 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:37:52.829 13:00:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:52.829 13:00:12 -- common/autotest_common.sh@10 -- # set +x 00:37:52.829 13:00:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:52.829 13:00:12 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:37:52.829 13:00:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:52.829 13:00:12 -- common/autotest_common.sh@10 -- # set +x 00:37:52.829 13:00:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:52.829 13:00:12 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:52.829 13:00:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:52.829 13:00:12 -- common/autotest_common.sh@10 -- # set +x 00:37:52.829 13:00:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:52.829 13:00:12 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:52.829 13:00:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:52.829 13:00:12 -- common/autotest_common.sh@10 -- # set +x 00:37:52.829 13:00:12 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:52.829 13:00:12 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:37:52.829 13:00:12 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:37:52.829 13:00:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:52.829 13:00:12 -- common/autotest_common.sh@10 -- # set +x 00:37:53.088 13:00:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:53.088 13:00:12 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:53.088 13:00:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:53.088 13:00:12 -- common/autotest_common.sh@10 -- # set +x 00:37:53.088 [2024-07-22 13:00:12.263517] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:53.088 13:00:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:53.088 13:00:12 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:37:53.088 13:00:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:53.088 13:00:12 -- common/autotest_common.sh@10 -- # set +x 00:37:53.088 13:00:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:53.088 13:00:12 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:37:53.088 13:00:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:53.088 13:00:12 -- common/autotest_common.sh@10 -- # set +x 00:37:53.088 13:00:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:53.088 13:00:12 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:53.088 13:00:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:53.088 13:00:12 -- common/autotest_common.sh@10 -- # set +x 00:37:53.088 13:00:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:53.088 13:00:12 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:53.088 13:00:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:53.088 13:00:12 -- common/autotest_common.sh@10 -- # set +x 00:37:53.088 13:00:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:53.088 13:00:12 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:37:53.088 13:00:12 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:37:53.088 13:00:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:53.088 13:00:12 -- common/autotest_common.sh@10 -- # set +x 00:37:53.088 13:00:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:53.088 13:00:12 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:53.088 13:00:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:53.088 13:00:12 -- common/autotest_common.sh@10 -- # set +x 00:37:53.088 [2024-07-22 13:00:12.319574] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:53.088 13:00:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:53.088 13:00:12 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:37:53.088 13:00:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:53.088 13:00:12 -- common/autotest_common.sh@10 -- # set +x 00:37:53.088 13:00:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:53.088 13:00:12 -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:37:53.088 13:00:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:53.088 13:00:12 -- common/autotest_common.sh@10 -- # set +x 00:37:53.088 13:00:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:53.088 13:00:12 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:53.088 13:00:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:53.088 13:00:12 -- common/autotest_common.sh@10 -- # set +x 00:37:53.088 13:00:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:53.088 13:00:12 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:53.088 13:00:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:53.088 13:00:12 -- common/autotest_common.sh@10 -- # set +x 00:37:53.088 13:00:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:53.088 13:00:12 -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:37:53.088 13:00:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:53.088 13:00:12 -- common/autotest_common.sh@10 -- # set +x 00:37:53.088 13:00:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:53.088 13:00:12 -- target/rpc.sh@110 -- # stats='{ 00:37:53.088 "poll_groups": [ 00:37:53.088 { 00:37:53.088 "admin_qpairs": 2, 00:37:53.088 "completed_nvme_io": 66, 00:37:53.088 "current_admin_qpairs": 0, 00:37:53.088 "current_io_qpairs": 0, 00:37:53.088 "io_qpairs": 16, 00:37:53.088 "name": "nvmf_tgt_poll_group_0", 00:37:53.088 "pending_bdev_io": 0, 00:37:53.088 "transports": [ 00:37:53.088 { 00:37:53.088 "trtype": "TCP" 00:37:53.088 } 00:37:53.088 ] 00:37:53.088 }, 00:37:53.088 { 00:37:53.088 "admin_qpairs": 3, 00:37:53.088 "completed_nvme_io": 116, 00:37:53.088 "current_admin_qpairs": 0, 00:37:53.088 "current_io_qpairs": 0, 00:37:53.088 "io_qpairs": 17, 00:37:53.088 "name": "nvmf_tgt_poll_group_1", 00:37:53.088 "pending_bdev_io": 0, 00:37:53.088 "transports": [ 00:37:53.088 { 00:37:53.088 "trtype": "TCP" 00:37:53.088 } 00:37:53.088 ] 00:37:53.088 }, 00:37:53.088 { 00:37:53.088 "admin_qpairs": 1, 00:37:53.088 "completed_nvme_io": 169, 00:37:53.088 "current_admin_qpairs": 0, 00:37:53.088 "current_io_qpairs": 0, 00:37:53.088 "io_qpairs": 19, 00:37:53.088 "name": "nvmf_tgt_poll_group_2", 00:37:53.088 "pending_bdev_io": 0, 00:37:53.088 "transports": [ 00:37:53.088 { 00:37:53.088 "trtype": "TCP" 00:37:53.088 } 00:37:53.088 ] 00:37:53.088 }, 00:37:53.088 { 00:37:53.088 "admin_qpairs": 1, 00:37:53.088 "completed_nvme_io": 69, 00:37:53.088 "current_admin_qpairs": 0, 00:37:53.088 "current_io_qpairs": 0, 00:37:53.088 "io_qpairs": 18, 00:37:53.088 "name": "nvmf_tgt_poll_group_3", 00:37:53.088 "pending_bdev_io": 0, 00:37:53.088 "transports": [ 00:37:53.088 { 00:37:53.088 "trtype": "TCP" 00:37:53.088 } 00:37:53.088 ] 00:37:53.088 } 00:37:53.088 ], 00:37:53.088 "tick_rate": 2200000000 00:37:53.088 }' 00:37:53.088 13:00:12 -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:37:53.088 13:00:12 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:37:53.088 13:00:12 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:37:53.088 13:00:12 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:37:53.088 13:00:12 -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:37:53.088 13:00:12 -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:37:53.088 13:00:12 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:37:53.088 13:00:12 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 
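The jsum call here pipes the captured nvmf_get_stats JSON through the jq filter above and sums the per-poll-group values with the awk one-liner on the next trace entry. A standalone sketch of that aggregation pattern, assuming the JSON sits in the stats variable captured earlier in the trace:

  # jsum-style aggregation over nvmf_get_stats output; $stats is assumed to
  # hold the JSON captured via: stats=$(rpc_cmd nvmf_get_stats)
  jsum() {
      local filter=$1
      echo "$stats" | jq "$filter" | awk '{s+=$1}END{print s}'
  }

  # e.g. total I/O queue pairs across all poll groups, as checked in the trace:
  #   (( $(jsum '.poll_groups[].io_qpairs') > 0 ))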
00:37:53.088 13:00:12 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:37:53.088 13:00:12 -- target/rpc.sh@113 -- # (( 70 > 0 )) 00:37:53.088 13:00:12 -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:37:53.088 13:00:12 -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:37:53.088 13:00:12 -- target/rpc.sh@123 -- # nvmftestfini 00:37:53.088 13:00:12 -- nvmf/common.sh@476 -- # nvmfcleanup 00:37:53.088 13:00:12 -- nvmf/common.sh@116 -- # sync 00:37:53.346 13:00:12 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:37:53.346 13:00:12 -- nvmf/common.sh@119 -- # set +e 00:37:53.346 13:00:12 -- nvmf/common.sh@120 -- # for i in {1..20} 00:37:53.346 13:00:12 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:37:53.346 rmmod nvme_tcp 00:37:53.346 rmmod nvme_fabrics 00:37:53.346 rmmod nvme_keyring 00:37:53.346 13:00:12 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:37:53.346 13:00:12 -- nvmf/common.sh@123 -- # set -e 00:37:53.346 13:00:12 -- nvmf/common.sh@124 -- # return 0 00:37:53.346 13:00:12 -- nvmf/common.sh@477 -- # '[' -n 77446 ']' 00:37:53.346 13:00:12 -- nvmf/common.sh@478 -- # killprocess 77446 00:37:53.346 13:00:12 -- common/autotest_common.sh@926 -- # '[' -z 77446 ']' 00:37:53.346 13:00:12 -- common/autotest_common.sh@930 -- # kill -0 77446 00:37:53.346 13:00:12 -- common/autotest_common.sh@931 -- # uname 00:37:53.346 13:00:12 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:37:53.346 13:00:12 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 77446 00:37:53.346 killing process with pid 77446 00:37:53.346 13:00:12 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:37:53.346 13:00:12 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:37:53.346 13:00:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 77446' 00:37:53.346 13:00:12 -- common/autotest_common.sh@945 -- # kill 77446 00:37:53.346 13:00:12 -- common/autotest_common.sh@950 -- # wait 77446 00:37:53.605 13:00:12 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:37:53.605 13:00:12 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:37:53.605 13:00:12 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:37:53.605 13:00:12 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:37:53.605 13:00:12 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:37:53.605 13:00:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:53.605 13:00:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:37:53.605 13:00:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:53.605 13:00:12 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:37:53.605 00:37:53.605 real 0m18.995s 00:37:53.605 user 1m11.930s 00:37:53.605 sys 0m2.366s 00:37:53.605 13:00:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:53.605 13:00:12 -- common/autotest_common.sh@10 -- # set +x 00:37:53.605 ************************************ 00:37:53.605 END TEST nvmf_rpc 00:37:53.605 ************************************ 00:37:53.605 13:00:12 -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:37:53.605 13:00:12 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:37:53.605 13:00:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:37:53.605 13:00:12 -- common/autotest_common.sh@10 -- # set +x 00:37:53.605 ************************************ 00:37:53.605 START TEST nvmf_invalid 00:37:53.605 ************************************ 00:37:53.605 
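At this point nvmftestfini has already torn the nvmf_rpc target down before nvmf_invalid begins below. An approximate sketch of that cleanup, limited to the commands visible in the trace (the killprocess and remove_spdk_ns helper bodies are not shown in this excerpt, so the netns removal below is an assumption):

  # Approximate teardown, reconstructed from the trace above.
  sync
  modprobe -v -r nvme-tcp          # rmmod lines show nvme_tcp/nvme_fabrics/nvme_keyring unloading
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid" && wait "$nvmfpid"       # killprocess: stop the nvmf_tgt process
  ip netns delete nvmf_tgt_ns_spdk         # remove_spdk_ns equivalent (assumed; body not traced)
  ip -4 addr flush nvmf_init_if            # drop 10.0.0.1/24 from the initiator veth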
13:00:12 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:37:53.605 * Looking for test storage... 00:37:53.605 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:37:53.605 13:00:12 -- target/invalid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:37:53.605 13:00:13 -- nvmf/common.sh@7 -- # uname -s 00:37:53.605 13:00:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:53.605 13:00:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:53.605 13:00:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:53.605 13:00:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:53.605 13:00:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:53.605 13:00:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:53.605 13:00:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:53.605 13:00:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:53.605 13:00:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:53.605 13:00:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:53.605 13:00:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:864ae4a3-cd96-439b-8ef2-6dab4d992115 00:37:53.605 13:00:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=864ae4a3-cd96-439b-8ef2-6dab4d992115 00:37:53.605 13:00:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:53.605 13:00:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:53.605 13:00:13 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:37:53.605 13:00:13 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:37:53.605 13:00:13 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:53.605 13:00:13 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:53.605 13:00:13 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:53.605 13:00:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:53.605 13:00:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:53.605 13:00:13 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:53.605 13:00:13 -- paths/export.sh@5 -- # export PATH 00:37:53.605 13:00:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:53.605 13:00:13 -- nvmf/common.sh@46 -- # : 0 00:37:53.605 13:00:13 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:37:53.605 13:00:13 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:37:53.605 13:00:13 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:37:53.605 13:00:13 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:53.605 13:00:13 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:53.605 13:00:13 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:37:53.605 13:00:13 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:37:53.605 13:00:13 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:37:53.605 13:00:13 -- target/invalid.sh@11 -- # multi_target_rpc=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:37:53.605 13:00:13 -- target/invalid.sh@12 -- # rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:37:53.605 13:00:13 -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:37:53.605 13:00:13 -- target/invalid.sh@14 -- # target=foobar 00:37:53.605 13:00:13 -- target/invalid.sh@16 -- # RANDOM=0 00:37:53.605 13:00:13 -- target/invalid.sh@34 -- # nvmftestinit 00:37:53.605 13:00:13 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:37:53.605 13:00:13 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:53.605 13:00:13 -- nvmf/common.sh@436 -- # prepare_net_devs 00:37:53.605 13:00:13 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:37:53.605 13:00:13 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:37:53.605 13:00:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:53.605 13:00:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:37:53.605 13:00:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:53.605 13:00:13 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:37:53.605 13:00:13 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:37:53.863 13:00:13 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:37:53.863 13:00:13 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:37:53.863 13:00:13 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:37:53.863 13:00:13 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:37:53.863 13:00:13 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:53.863 13:00:13 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:53.863 13:00:13 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 
00:37:53.863 13:00:13 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:37:53.863 13:00:13 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:37:53.863 13:00:13 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:37:53.863 13:00:13 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:37:53.863 13:00:13 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:53.863 13:00:13 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:37:53.863 13:00:13 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:37:53.863 13:00:13 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:37:53.863 13:00:13 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:37:53.863 13:00:13 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:37:53.863 13:00:13 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:37:53.863 Cannot find device "nvmf_tgt_br" 00:37:53.863 13:00:13 -- nvmf/common.sh@154 -- # true 00:37:53.863 13:00:13 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:37:53.863 Cannot find device "nvmf_tgt_br2" 00:37:53.863 13:00:13 -- nvmf/common.sh@155 -- # true 00:37:53.863 13:00:13 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:37:53.863 13:00:13 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:37:53.863 Cannot find device "nvmf_tgt_br" 00:37:53.863 13:00:13 -- nvmf/common.sh@157 -- # true 00:37:53.863 13:00:13 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:37:53.863 Cannot find device "nvmf_tgt_br2" 00:37:53.863 13:00:13 -- nvmf/common.sh@158 -- # true 00:37:53.863 13:00:13 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:37:53.863 13:00:13 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:37:53.863 13:00:13 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:37:53.863 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:37:53.863 13:00:13 -- nvmf/common.sh@161 -- # true 00:37:53.863 13:00:13 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:37:53.863 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:37:53.863 13:00:13 -- nvmf/common.sh@162 -- # true 00:37:53.863 13:00:13 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:37:53.863 13:00:13 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:37:53.863 13:00:13 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:37:53.863 13:00:13 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:37:53.863 13:00:13 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:37:53.863 13:00:13 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:37:53.863 13:00:13 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:37:53.863 13:00:13 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:37:53.863 13:00:13 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:37:53.863 13:00:13 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:37:53.863 13:00:13 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:37:53.863 13:00:13 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:37:53.863 13:00:13 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 
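nvmf_veth_init is building the virtual topology every TCP test in this run relies on: one veth pair for the initiator and two for the target, with the target ends moved into the nvmf_tgt_ns_spdk namespace. A condensed sketch of the ip(8) steps seen in the trace, after first flushing any stale interfaces from a previous run; the nvmf_br bridge, the iptables ACCEPT rules, and the ping checks follow on the next lines:

  # Condensed veth/netns topology from nvmf_veth_init, as traced above.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator side
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # target listener address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # second target address
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up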
00:37:53.863 13:00:13 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:37:53.863 13:00:13 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:37:53.863 13:00:13 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:37:54.128 13:00:13 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:37:54.128 13:00:13 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:37:54.128 13:00:13 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:37:54.128 13:00:13 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:37:54.128 13:00:13 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:37:54.128 13:00:13 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:37:54.128 13:00:13 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:37:54.128 13:00:13 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:37:54.128 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:54.128 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:37:54.128 00:37:54.128 --- 10.0.0.2 ping statistics --- 00:37:54.128 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:54.128 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:37:54.128 13:00:13 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:37:54.128 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:37:54.128 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:37:54.128 00:37:54.128 --- 10.0.0.3 ping statistics --- 00:37:54.128 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:54.128 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:37:54.128 13:00:13 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:37:54.128 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:54.128 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:37:54.128 00:37:54.128 --- 10.0.0.1 ping statistics --- 00:37:54.128 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:54.128 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:37:54.128 13:00:13 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:54.128 13:00:13 -- nvmf/common.sh@421 -- # return 0 00:37:54.128 13:00:13 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:37:54.128 13:00:13 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:54.128 13:00:13 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:37:54.128 13:00:13 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:37:54.128 13:00:13 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:54.128 13:00:13 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:37:54.128 13:00:13 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:37:54.128 13:00:13 -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:37:54.128 13:00:13 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:37:54.128 13:00:13 -- common/autotest_common.sh@712 -- # xtrace_disable 00:37:54.128 13:00:13 -- common/autotest_common.sh@10 -- # set +x 00:37:54.128 13:00:13 -- nvmf/common.sh@469 -- # nvmfpid=77962 00:37:54.128 13:00:13 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:37:54.128 13:00:13 -- nvmf/common.sh@470 -- # waitforlisten 77962 00:37:54.128 13:00:13 -- common/autotest_common.sh@819 -- # '[' -z 77962 ']' 00:37:54.128 13:00:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:54.128 13:00:13 -- common/autotest_common.sh@824 -- # local max_retries=100 00:37:54.129 13:00:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:54.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:54.129 13:00:13 -- common/autotest_common.sh@828 -- # xtrace_disable 00:37:54.129 13:00:13 -- common/autotest_common.sh@10 -- # set +x 00:37:54.129 [2024-07-22 13:00:13.433852] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:37:54.129 [2024-07-22 13:00:13.433947] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:54.392 [2024-07-22 13:00:13.573132] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:54.392 [2024-07-22 13:00:13.668912] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:37:54.392 [2024-07-22 13:00:13.669068] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:54.392 [2024-07-22 13:00:13.669083] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:54.392 [2024-07-22 13:00:13.669092] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
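The nvmfappstart step above launches the SPDK target inside the nvmf_tgt_ns_spdk namespace and then waits for its RPC socket. A minimal sketch of that step, with the waitforlisten helper paraphrased as a simple poll loop (binary path and flags copied from the trace; the poll loop is an assumption, not the helper's exact implementation):

# launch the SPDK target inside the test namespace, as the trace does
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!

# rough stand-in for waitforlisten: poll the RPC socket until the target answers
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid" || exit 1    # give up if the target process died
    sleep 0.5
done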
00:37:54.392 [2024-07-22 13:00:13.669202] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:37:54.392 [2024-07-22 13:00:13.669639] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:37:54.392 [2024-07-22 13:00:13.669744] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:37:54.392 [2024-07-22 13:00:13.669753] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:37:55.327 13:00:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:37:55.327 13:00:14 -- common/autotest_common.sh@852 -- # return 0 00:37:55.327 13:00:14 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:37:55.327 13:00:14 -- common/autotest_common.sh@718 -- # xtrace_disable 00:37:55.327 13:00:14 -- common/autotest_common.sh@10 -- # set +x 00:37:55.327 13:00:14 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:55.327 13:00:14 -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:37:55.327 13:00:14 -- target/invalid.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode12581 00:37:55.327 [2024-07-22 13:00:14.681882] nvmf_rpc.c: 401:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:37:55.327 13:00:14 -- target/invalid.sh@40 -- # out='2024/07/22 13:00:14 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode12581 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:37:55.327 request: 00:37:55.327 { 00:37:55.327 "method": "nvmf_create_subsystem", 00:37:55.327 "params": { 00:37:55.327 "nqn": "nqn.2016-06.io.spdk:cnode12581", 00:37:55.327 "tgt_name": "foobar" 00:37:55.327 } 00:37:55.327 } 00:37:55.327 Got JSON-RPC error response 00:37:55.327 GoRPCClient: error on JSON-RPC call' 00:37:55.327 13:00:14 -- target/invalid.sh@41 -- # [[ 2024/07/22 13:00:14 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode12581 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:37:55.327 request: 00:37:55.327 { 00:37:55.327 "method": "nvmf_create_subsystem", 00:37:55.327 "params": { 00:37:55.327 "nqn": "nqn.2016-06.io.spdk:cnode12581", 00:37:55.327 "tgt_name": "foobar" 00:37:55.327 } 00:37:55.327 } 00:37:55.327 Got JSON-RPC error response 00:37:55.327 GoRPCClient: error on JSON-RPC call == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:37:55.327 13:00:14 -- target/invalid.sh@45 -- # echo -e '\x1f' 00:37:55.328 13:00:14 -- target/invalid.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode24641 00:37:55.586 [2024-07-22 13:00:14.954148] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24641: invalid serial number 'SPDKISFASTANDAWESOME' 00:37:55.586 13:00:14 -- target/invalid.sh@45 -- # out='2024/07/22 13:00:14 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode24641 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:37:55.586 request: 00:37:55.586 { 00:37:55.586 "method": "nvmf_create_subsystem", 00:37:55.586 "params": { 00:37:55.586 "nqn": "nqn.2016-06.io.spdk:cnode24641", 00:37:55.586 
"serial_number": "SPDKISFASTANDAWESOME\u001f" 00:37:55.586 } 00:37:55.586 } 00:37:55.586 Got JSON-RPC error response 00:37:55.586 GoRPCClient: error on JSON-RPC call' 00:37:55.586 13:00:14 -- target/invalid.sh@46 -- # [[ 2024/07/22 13:00:14 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode24641 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:37:55.586 request: 00:37:55.586 { 00:37:55.586 "method": "nvmf_create_subsystem", 00:37:55.586 "params": { 00:37:55.586 "nqn": "nqn.2016-06.io.spdk:cnode24641", 00:37:55.586 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:37:55.586 } 00:37:55.586 } 00:37:55.586 Got JSON-RPC error response 00:37:55.586 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:37:55.586 13:00:14 -- target/invalid.sh@50 -- # echo -e '\x1f' 00:37:55.586 13:00:14 -- target/invalid.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode18979 00:37:55.843 [2024-07-22 13:00:15.254394] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18979: invalid model number 'SPDK_Controller' 00:37:56.102 13:00:15 -- target/invalid.sh@50 -- # out='2024/07/22 13:00:15 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode18979], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:37:56.102 request: 00:37:56.102 { 00:37:56.102 "method": "nvmf_create_subsystem", 00:37:56.102 "params": { 00:37:56.102 "nqn": "nqn.2016-06.io.spdk:cnode18979", 00:37:56.102 "model_number": "SPDK_Controller\u001f" 00:37:56.102 } 00:37:56.102 } 00:37:56.102 Got JSON-RPC error response 00:37:56.102 GoRPCClient: error on JSON-RPC call' 00:37:56.102 13:00:15 -- target/invalid.sh@51 -- # [[ 2024/07/22 13:00:15 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode18979], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:37:56.102 request: 00:37:56.102 { 00:37:56.102 "method": "nvmf_create_subsystem", 00:37:56.102 "params": { 00:37:56.102 "nqn": "nqn.2016-06.io.spdk:cnode18979", 00:37:56.102 "model_number": "SPDK_Controller\u001f" 00:37:56.102 } 00:37:56.102 } 00:37:56.102 Got JSON-RPC error response 00:37:56.102 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:37:56.102 13:00:15 -- target/invalid.sh@54 -- # gen_random_s 21 00:37:56.102 13:00:15 -- target/invalid.sh@19 -- # local length=21 ll 00:37:56.102 13:00:15 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:37:56.102 13:00:15 -- target/invalid.sh@21 -- # local chars 00:37:56.102 13:00:15 -- target/invalid.sh@22 -- # local string 00:37:56.102 13:00:15 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:37:56.102 13:00:15 -- target/invalid.sh@24 -- # (( ll < 
length )) 00:37:56.102 13:00:15 -- target/invalid.sh@25 -- # printf %x 76 00:37:56.102 13:00:15 -- target/invalid.sh@25 -- # echo -e '\x4c' 00:37:56.102 13:00:15 -- target/invalid.sh@25 -- # string+=L 00:37:56.102 13:00:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:37:56.102 13:00:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:37:56.102 13:00:15 -- target/invalid.sh@25 -- # printf %x 112 00:37:56.102 13:00:15 -- target/invalid.sh@25 -- # echo -e '\x70' 00:37:56.102 13:00:15 -- target/invalid.sh@25 -- # string+=p 00:37:56.102 13:00:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:37:56.102 13:00:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:37:56.102 13:00:15 -- target/invalid.sh@25 -- # printf %x 70 00:37:56.102 13:00:15 -- target/invalid.sh@25 -- # echo -e '\x46' 00:37:56.102 13:00:15 -- target/invalid.sh@25 -- # string+=F 00:37:56.102 13:00:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:37:56.102 13:00:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:37:56.102 13:00:15 -- target/invalid.sh@25 -- # printf %x 103 00:37:56.102 13:00:15 -- target/invalid.sh@25 -- # echo -e '\x67' 00:37:56.102 13:00:15 -- target/invalid.sh@25 -- # string+=g 00:37:56.102 13:00:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:37:56.102 13:00:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:37:56.102 13:00:15 -- target/invalid.sh@25 -- # printf %x 71 00:37:56.102 13:00:15 -- target/invalid.sh@25 -- # echo -e '\x47' 00:37:56.102 13:00:15 -- target/invalid.sh@25 -- # string+=G 00:37:56.102 13:00:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:37:56.102 13:00:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:37:56.102 13:00:15 -- target/invalid.sh@25 -- # printf %x 89 00:37:56.102 13:00:15 -- target/invalid.sh@25 -- # echo -e '\x59' 00:37:56.102 13:00:15 -- target/invalid.sh@25 -- # string+=Y 00:37:56.102 13:00:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:37:56.102 13:00:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:37:56.102 13:00:15 -- target/invalid.sh@25 -- # printf %x 43 00:37:56.102 13:00:15 -- target/invalid.sh@25 -- # echo -e '\x2b' 00:37:56.102 13:00:15 -- target/invalid.sh@25 -- # string+=+ 00:37:56.102 13:00:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:37:56.102 13:00:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:37:56.102 13:00:15 -- target/invalid.sh@25 -- # printf %x 76 00:37:56.102 13:00:15 -- target/invalid.sh@25 -- # echo -e '\x4c' 00:37:56.102 13:00:15 -- target/invalid.sh@25 -- # string+=L 00:37:56.102 13:00:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:37:56.102 13:00:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:37:56.102 13:00:15 -- target/invalid.sh@25 -- # printf %x 114 00:37:56.102 13:00:15 -- target/invalid.sh@25 -- # echo -e '\x72' 00:37:56.102 13:00:15 -- target/invalid.sh@25 -- # string+=r 00:37:56.102 13:00:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:37:56.102 13:00:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:37:56.102 13:00:15 -- target/invalid.sh@25 -- # printf %x 34 00:37:56.102 13:00:15 -- target/invalid.sh@25 -- # echo -e '\x22' 00:37:56.102 13:00:15 -- target/invalid.sh@25 -- # string+='"' 00:37:56.102 13:00:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:37:56.102 13:00:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:37:56.102 13:00:15 -- target/invalid.sh@25 -- # printf %x 77 00:37:56.102 13:00:15 -- target/invalid.sh@25 -- # echo -e '\x4d' 00:37:56.102 13:00:15 -- target/invalid.sh@25 -- # string+=M 00:37:56.102 13:00:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:37:56.102 13:00:15 -- target/invalid.sh@24 -- # (( ll < length 
)) 00:37:56.102 13:00:15 -- target/invalid.sh@25 -- # printf %x 118 00:37:56.102 13:00:15 -- target/invalid.sh@25 -- # echo -e '\x76' 00:37:56.102 13:00:15 -- target/invalid.sh@25 -- # string+=v 00:37:56.102 13:00:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:37:56.102 13:00:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:37:56.102 13:00:15 -- target/invalid.sh@25 -- # printf %x 55 00:37:56.102 13:00:15 -- target/invalid.sh@25 -- # echo -e '\x37' 00:37:56.102 13:00:15 -- target/invalid.sh@25 -- # string+=7 00:37:56.102 13:00:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:37:56.102 13:00:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:37:56.102 13:00:15 -- target/invalid.sh@25 -- # printf %x 114 00:37:56.102 13:00:15 -- target/invalid.sh@25 -- # echo -e '\x72' 00:37:56.102 13:00:15 -- target/invalid.sh@25 -- # string+=r 00:37:56.102 13:00:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:37:56.102 13:00:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:37:56.102 13:00:15 -- target/invalid.sh@25 -- # printf %x 47 00:37:56.102 13:00:15 -- target/invalid.sh@25 -- # echo -e '\x2f' 00:37:56.102 13:00:15 -- target/invalid.sh@25 -- # string+=/ 00:37:56.102 13:00:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:37:56.102 13:00:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:37:56.102 13:00:15 -- target/invalid.sh@25 -- # printf %x 124 00:37:56.102 13:00:15 -- target/invalid.sh@25 -- # echo -e '\x7c' 00:37:56.102 13:00:15 -- target/invalid.sh@25 -- # string+='|' 00:37:56.102 13:00:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:37:56.102 13:00:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:37:56.102 13:00:15 -- target/invalid.sh@25 -- # printf %x 113 00:37:56.102 13:00:15 -- target/invalid.sh@25 -- # echo -e '\x71' 00:37:56.102 13:00:15 -- target/invalid.sh@25 -- # string+=q 00:37:56.102 13:00:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:37:56.102 13:00:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:37:56.102 13:00:15 -- target/invalid.sh@25 -- # printf %x 105 00:37:56.102 13:00:15 -- target/invalid.sh@25 -- # echo -e '\x69' 00:37:56.102 13:00:15 -- target/invalid.sh@25 -- # string+=i 00:37:56.102 13:00:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:37:56.102 13:00:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:37:56.102 13:00:15 -- target/invalid.sh@25 -- # printf %x 86 00:37:56.102 13:00:15 -- target/invalid.sh@25 -- # echo -e '\x56' 00:37:56.102 13:00:15 -- target/invalid.sh@25 -- # string+=V 00:37:56.102 13:00:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:37:56.102 13:00:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:37:56.102 13:00:15 -- target/invalid.sh@25 -- # printf %x 39 00:37:56.102 13:00:15 -- target/invalid.sh@25 -- # echo -e '\x27' 00:37:56.102 13:00:15 -- target/invalid.sh@25 -- # string+=\' 00:37:56.102 13:00:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:37:56.102 13:00:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:37:56.102 13:00:15 -- target/invalid.sh@25 -- # printf %x 95 00:37:56.102 13:00:15 -- target/invalid.sh@25 -- # echo -e '\x5f' 00:37:56.102 13:00:15 -- target/invalid.sh@25 -- # string+=_ 00:37:56.102 13:00:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:37:56.102 13:00:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:37:56.102 13:00:15 -- target/invalid.sh@28 -- # [[ L == \- ]] 00:37:56.102 13:00:15 -- target/invalid.sh@31 -- # echo 'LpFgGY+Lr"Mv7r/|qiV'\''_' 00:37:56.102 13:00:15 -- target/invalid.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s 'LpFgGY+Lr"Mv7r/|qiV'\''_' 
nqn.2016-06.io.spdk:cnode23263 00:37:56.361 [2024-07-22 13:00:15.626685] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23263: invalid serial number 'LpFgGY+Lr"Mv7r/|qiV'_' 00:37:56.361 13:00:15 -- target/invalid.sh@54 -- # out='2024/07/22 13:00:15 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode23263 serial_number:LpFgGY+Lr"Mv7r/|qiV'\''_], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN LpFgGY+Lr"Mv7r/|qiV'\''_ 00:37:56.361 request: 00:37:56.361 { 00:37:56.361 "method": "nvmf_create_subsystem", 00:37:56.361 "params": { 00:37:56.361 "nqn": "nqn.2016-06.io.spdk:cnode23263", 00:37:56.361 "serial_number": "LpFgGY+Lr\"Mv7r/|qiV'\''_" 00:37:56.361 } 00:37:56.361 } 00:37:56.361 Got JSON-RPC error response 00:37:56.361 GoRPCClient: error on JSON-RPC call' 00:37:56.361 13:00:15 -- target/invalid.sh@55 -- # [[ 2024/07/22 13:00:15 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode23263 serial_number:LpFgGY+Lr"Mv7r/|qiV'_], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN LpFgGY+Lr"Mv7r/|qiV'_ 00:37:56.361 request: 00:37:56.361 { 00:37:56.361 "method": "nvmf_create_subsystem", 00:37:56.361 "params": { 00:37:56.361 "nqn": "nqn.2016-06.io.spdk:cnode23263", 00:37:56.361 "serial_number": "LpFgGY+Lr\"Mv7r/|qiV'_" 00:37:56.361 } 00:37:56.361 } 00:37:56.361 Got JSON-RPC error response 00:37:56.361 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:37:56.361 13:00:15 -- target/invalid.sh@58 -- # gen_random_s 41 00:37:56.361 13:00:15 -- target/invalid.sh@19 -- # local length=41 ll 00:37:56.361 13:00:15 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:37:56.361 13:00:15 -- target/invalid.sh@21 -- # local chars 00:37:56.361 13:00:15 -- target/invalid.sh@22 -- # local string 00:37:56.361 13:00:15 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:37:56.361 13:00:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:37:56.361 13:00:15 -- target/invalid.sh@25 -- # printf %x 110 00:37:56.361 13:00:15 -- target/invalid.sh@25 -- # echo -e '\x6e' 00:37:56.361 13:00:15 -- target/invalid.sh@25 -- # string+=n 00:37:56.361 13:00:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:37:56.361 13:00:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:37:56.361 13:00:15 -- target/invalid.sh@25 -- # printf %x 81 00:37:56.361 13:00:15 -- target/invalid.sh@25 -- # echo -e '\x51' 00:37:56.361 13:00:15 -- target/invalid.sh@25 -- # string+=Q 00:37:56.361 13:00:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:37:56.361 13:00:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:37:56.361 13:00:15 -- target/invalid.sh@25 -- # printf %x 37 00:37:56.361 13:00:15 -- target/invalid.sh@25 -- # echo -e '\x25' 00:37:56.361 13:00:15 -- target/invalid.sh@25 -- # string+=% 00:37:56.361 13:00:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:37:56.361 13:00:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:37:56.361 13:00:15 -- target/invalid.sh@25 
-- # printf %x 83 00:37:56.361 13:00:15 -- target/invalid.sh@25 -- # echo -e '\x53' 00:37:56.361 13:00:15 -- target/invalid.sh@25 -- # string+=S 00:37:56.361 13:00:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:37:56.361 13:00:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:37:56.361 13:00:15 -- target/invalid.sh@25 -- # printf %x 38 00:37:56.361 13:00:15 -- target/invalid.sh@25 -- # echo -e '\x26' 00:37:56.361 13:00:15 -- target/invalid.sh@25 -- # string+='&' 00:37:56.361 13:00:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:37:56.361 13:00:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:37:56.361 13:00:15 -- target/invalid.sh@25 -- # printf %x 112 00:37:56.361 13:00:15 -- target/invalid.sh@25 -- # echo -e '\x70' 00:37:56.361 13:00:15 -- target/invalid.sh@25 -- # string+=p 00:37:56.361 13:00:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:37:56.361 13:00:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:37:56.361 13:00:15 -- target/invalid.sh@25 -- # printf %x 74 00:37:56.361 13:00:15 -- target/invalid.sh@25 -- # echo -e '\x4a' 00:37:56.361 13:00:15 -- target/invalid.sh@25 -- # string+=J 00:37:56.361 13:00:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:37:56.361 13:00:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:37:56.361 13:00:15 -- target/invalid.sh@25 -- # printf %x 127 00:37:56.361 13:00:15 -- target/invalid.sh@25 -- # echo -e '\x7f' 00:37:56.361 13:00:15 -- target/invalid.sh@25 -- # string+=$'\177' 00:37:56.361 13:00:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:37:56.361 13:00:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:37:56.361 13:00:15 -- target/invalid.sh@25 -- # printf %x 124 00:37:56.361 13:00:15 -- target/invalid.sh@25 -- # echo -e '\x7c' 00:37:56.361 13:00:15 -- target/invalid.sh@25 -- # string+='|' 00:37:56.361 13:00:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:37:56.361 13:00:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:37:56.361 13:00:15 -- target/invalid.sh@25 -- # printf %x 68 00:37:56.361 13:00:15 -- target/invalid.sh@25 -- # echo -e '\x44' 00:37:56.361 13:00:15 -- target/invalid.sh@25 -- # string+=D 00:37:56.361 13:00:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:37:56.361 13:00:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:37:56.361 13:00:15 -- target/invalid.sh@25 -- # printf %x 41 00:37:56.361 13:00:15 -- target/invalid.sh@25 -- # echo -e '\x29' 00:37:56.361 13:00:15 -- target/invalid.sh@25 -- # string+=')' 00:37:56.361 13:00:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:37:56.361 13:00:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:37:56.361 13:00:15 -- target/invalid.sh@25 -- # printf %x 44 00:37:56.361 13:00:15 -- target/invalid.sh@25 -- # echo -e '\x2c' 00:37:56.361 13:00:15 -- target/invalid.sh@25 -- # string+=, 00:37:56.361 13:00:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:37:56.361 13:00:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:37:56.361 13:00:15 -- target/invalid.sh@25 -- # printf %x 110 00:37:56.361 13:00:15 -- target/invalid.sh@25 -- # echo -e '\x6e' 00:37:56.361 13:00:15 -- target/invalid.sh@25 -- # string+=n 00:37:56.361 13:00:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:37:56.361 13:00:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:37:56.362 13:00:15 -- target/invalid.sh@25 -- # printf %x 78 00:37:56.362 13:00:15 -- target/invalid.sh@25 -- # echo -e '\x4e' 00:37:56.362 13:00:15 -- target/invalid.sh@25 -- # string+=N 00:37:56.362 13:00:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:37:56.362 13:00:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:37:56.362 13:00:15 -- 
target/invalid.sh@25 -- # printf %x 93 00:37:56.362 13:00:15 -- target/invalid.sh@25 -- # echo -e '\x5d' 00:37:56.362 13:00:15 -- target/invalid.sh@25 -- # string+=']' 00:37:56.362 13:00:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:37:56.362 13:00:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:37:56.362 13:00:15 -- target/invalid.sh@25 -- # printf %x 71 00:37:56.362 13:00:15 -- target/invalid.sh@25 -- # echo -e '\x47' 00:37:56.362 13:00:15 -- target/invalid.sh@25 -- # string+=G 00:37:56.362 13:00:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:37:56.362 13:00:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:37:56.362 13:00:15 -- target/invalid.sh@25 -- # printf %x 60 00:37:56.362 13:00:15 -- target/invalid.sh@25 -- # echo -e '\x3c' 00:37:56.362 13:00:15 -- target/invalid.sh@25 -- # string+='<' 00:37:56.362 13:00:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:37:56.362 13:00:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:37:56.362 13:00:15 -- target/invalid.sh@25 -- # printf %x 124 00:37:56.362 13:00:15 -- target/invalid.sh@25 -- # echo -e '\x7c' 00:37:56.362 13:00:15 -- target/invalid.sh@25 -- # string+='|' 00:37:56.362 13:00:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:37:56.362 13:00:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:37:56.362 13:00:15 -- target/invalid.sh@25 -- # printf %x 121 00:37:56.362 13:00:15 -- target/invalid.sh@25 -- # echo -e '\x79' 00:37:56.362 13:00:15 -- target/invalid.sh@25 -- # string+=y 00:37:56.362 13:00:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:37:56.362 13:00:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:37:56.362 13:00:15 -- target/invalid.sh@25 -- # printf %x 61 00:37:56.362 13:00:15 -- target/invalid.sh@25 -- # echo -e '\x3d' 00:37:56.362 13:00:15 -- target/invalid.sh@25 -- # string+== 00:37:56.362 13:00:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:37:56.362 13:00:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:37:56.362 13:00:15 -- target/invalid.sh@25 -- # printf %x 49 00:37:56.362 13:00:15 -- target/invalid.sh@25 -- # echo -e '\x31' 00:37:56.362 13:00:15 -- target/invalid.sh@25 -- # string+=1 00:37:56.362 13:00:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:37:56.362 13:00:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:37:56.362 13:00:15 -- target/invalid.sh@25 -- # printf %x 116 00:37:56.362 13:00:15 -- target/invalid.sh@25 -- # echo -e '\x74' 00:37:56.362 13:00:15 -- target/invalid.sh@25 -- # string+=t 00:37:56.362 13:00:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:37:56.362 13:00:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:37:56.362 13:00:15 -- target/invalid.sh@25 -- # printf %x 123 00:37:56.362 13:00:15 -- target/invalid.sh@25 -- # echo -e '\x7b' 00:37:56.362 13:00:15 -- target/invalid.sh@25 -- # string+='{' 00:37:56.362 13:00:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:37:56.362 13:00:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:37:56.362 13:00:15 -- target/invalid.sh@25 -- # printf %x 50 00:37:56.362 13:00:15 -- target/invalid.sh@25 -- # echo -e '\x32' 00:37:56.362 13:00:15 -- target/invalid.sh@25 -- # string+=2 00:37:56.362 13:00:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:37:56.362 13:00:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:37:56.362 13:00:15 -- target/invalid.sh@25 -- # printf %x 44 00:37:56.362 13:00:15 -- target/invalid.sh@25 -- # echo -e '\x2c' 00:37:56.362 13:00:15 -- target/invalid.sh@25 -- # string+=, 00:37:56.362 13:00:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:37:56.362 13:00:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:37:56.362 13:00:15 -- 
target/invalid.sh@25 -- # printf %x 112 00:37:56.362 13:00:15 -- target/invalid.sh@25 -- # echo -e '\x70' 00:37:56.362 13:00:15 -- target/invalid.sh@25 -- # string+=p 00:37:56.362 13:00:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:37:56.362 13:00:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:37:56.362 13:00:15 -- target/invalid.sh@25 -- # printf %x 110 00:37:56.362 13:00:15 -- target/invalid.sh@25 -- # echo -e '\x6e' 00:37:56.362 13:00:15 -- target/invalid.sh@25 -- # string+=n 00:37:56.362 13:00:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:37:56.362 13:00:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:37:56.362 13:00:15 -- target/invalid.sh@25 -- # printf %x 105 00:37:56.362 13:00:15 -- target/invalid.sh@25 -- # echo -e '\x69' 00:37:56.362 13:00:15 -- target/invalid.sh@25 -- # string+=i 00:37:56.362 13:00:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:37:56.362 13:00:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:37:56.362 13:00:15 -- target/invalid.sh@25 -- # printf %x 92 00:37:56.362 13:00:15 -- target/invalid.sh@25 -- # echo -e '\x5c' 00:37:56.362 13:00:15 -- target/invalid.sh@25 -- # string+='\' 00:37:56.362 13:00:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:37:56.362 13:00:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:37:56.620 13:00:15 -- target/invalid.sh@25 -- # printf %x 104 00:37:56.620 13:00:15 -- target/invalid.sh@25 -- # echo -e '\x68' 00:37:56.620 13:00:15 -- target/invalid.sh@25 -- # string+=h 00:37:56.620 13:00:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:37:56.620 13:00:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:37:56.620 13:00:15 -- target/invalid.sh@25 -- # printf %x 80 00:37:56.620 13:00:15 -- target/invalid.sh@25 -- # echo -e '\x50' 00:37:56.620 13:00:15 -- target/invalid.sh@25 -- # string+=P 00:37:56.620 13:00:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:37:56.620 13:00:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:37:56.620 13:00:15 -- target/invalid.sh@25 -- # printf %x 67 00:37:56.620 13:00:15 -- target/invalid.sh@25 -- # echo -e '\x43' 00:37:56.620 13:00:15 -- target/invalid.sh@25 -- # string+=C 00:37:56.620 13:00:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:37:56.620 13:00:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:37:56.620 13:00:15 -- target/invalid.sh@25 -- # printf %x 111 00:37:56.620 13:00:15 -- target/invalid.sh@25 -- # echo -e '\x6f' 00:37:56.620 13:00:15 -- target/invalid.sh@25 -- # string+=o 00:37:56.620 13:00:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:37:56.620 13:00:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:37:56.620 13:00:15 -- target/invalid.sh@25 -- # printf %x 67 00:37:56.620 13:00:15 -- target/invalid.sh@25 -- # echo -e '\x43' 00:37:56.620 13:00:15 -- target/invalid.sh@25 -- # string+=C 00:37:56.620 13:00:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:37:56.620 13:00:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:37:56.620 13:00:15 -- target/invalid.sh@25 -- # printf %x 39 00:37:56.620 13:00:15 -- target/invalid.sh@25 -- # echo -e '\x27' 00:37:56.620 13:00:15 -- target/invalid.sh@25 -- # string+=\' 00:37:56.620 13:00:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:37:56.620 13:00:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:37:56.620 13:00:15 -- target/invalid.sh@25 -- # printf %x 62 00:37:56.620 13:00:15 -- target/invalid.sh@25 -- # echo -e '\x3e' 00:37:56.620 13:00:15 -- target/invalid.sh@25 -- # string+='>' 00:37:56.620 13:00:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:37:56.620 13:00:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:37:56.620 13:00:15 -- 
target/invalid.sh@25 -- # printf %x 50 00:37:56.620 13:00:15 -- target/invalid.sh@25 -- # echo -e '\x32' 00:37:56.620 13:00:15 -- target/invalid.sh@25 -- # string+=2 00:37:56.620 13:00:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:37:56.620 13:00:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:37:56.620 13:00:15 -- target/invalid.sh@25 -- # printf %x 51 00:37:56.620 13:00:15 -- target/invalid.sh@25 -- # echo -e '\x33' 00:37:56.620 13:00:15 -- target/invalid.sh@25 -- # string+=3 00:37:56.620 13:00:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:37:56.620 13:00:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:37:56.620 13:00:15 -- target/invalid.sh@25 -- # printf %x 95 00:37:56.620 13:00:15 -- target/invalid.sh@25 -- # echo -e '\x5f' 00:37:56.620 13:00:15 -- target/invalid.sh@25 -- # string+=_ 00:37:56.620 13:00:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:37:56.620 13:00:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:37:56.620 13:00:15 -- target/invalid.sh@25 -- # printf %x 126 00:37:56.620 13:00:15 -- target/invalid.sh@25 -- # echo -e '\x7e' 00:37:56.620 13:00:15 -- target/invalid.sh@25 -- # string+='~' 00:37:56.621 13:00:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:37:56.621 13:00:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:37:56.621 13:00:15 -- target/invalid.sh@25 -- # printf %x 125 00:37:56.621 13:00:15 -- target/invalid.sh@25 -- # echo -e '\x7d' 00:37:56.621 13:00:15 -- target/invalid.sh@25 -- # string+='}' 00:37:56.621 13:00:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:37:56.621 13:00:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:37:56.621 13:00:15 -- target/invalid.sh@28 -- # [[ n == \- ]] 00:37:56.621 13:00:15 -- target/invalid.sh@31 -- # echo 'nQ%S&pJ|D),nN]G<|y=1t{2,pni\hPCoC'\''>23_~}' 00:37:56.621 13:00:15 -- target/invalid.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d 'nQ%S&pJ|D),nN]G<|y=1t{2,pni\hPCoC'\''>23_~}' nqn.2016-06.io.spdk:cnode24331 00:37:56.879 [2024-07-22 13:00:16.087146] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24331: invalid model number 'nQ%S&pJ|D),nN]G<|y=1t{2,pni\hPCoC'>23_~}' 00:37:56.879 13:00:16 -- target/invalid.sh@58 -- # out='2024/07/22 13:00:16 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:nQ%S&pJ|D),nN]G<|y=1t{2,pni\hPCoC'\''>23_~} nqn:nqn.2016-06.io.spdk:cnode24331], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN nQ%S&pJ|D),nN]G<|y=1t{2,pni\hPCoC'\''>23_~} 00:37:56.879 request: 00:37:56.879 { 00:37:56.879 "method": "nvmf_create_subsystem", 00:37:56.879 "params": { 00:37:56.879 "nqn": "nqn.2016-06.io.spdk:cnode24331", 00:37:56.879 "model_number": "nQ%S&pJ\u007f|D),nN]G<|y=1t{2,pni\\hPCoC'\''>23_~}" 00:37:56.879 } 00:37:56.879 } 00:37:56.879 Got JSON-RPC error response 00:37:56.879 GoRPCClient: error on JSON-RPC call' 00:37:56.879 13:00:16 -- target/invalid.sh@59 -- # [[ 2024/07/22 13:00:16 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:nQ%S&pJ|D),nN]G<|y=1t{2,pni\hPCoC'>23_~} nqn:nqn.2016-06.io.spdk:cnode24331], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN nQ%S&pJ|D),nN]G<|y=1t{2,pni\hPCoC'>23_~} 00:37:56.879 request: 00:37:56.879 { 00:37:56.879 "method": "nvmf_create_subsystem", 00:37:56.879 "params": { 00:37:56.879 "nqn": "nqn.2016-06.io.spdk:cnode24331", 00:37:56.879 "model_number": "nQ%S&pJ\u007f|D),nN]G<|y=1t{2,pni\\hPCoC'>23_~}" 00:37:56.879 } 00:37:56.879 } 00:37:56.879 Got JSON-RPC error 
response 00:37:56.879 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:37:56.879 13:00:16 -- target/invalid.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:37:57.136 [2024-07-22 13:00:16.367399] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:57.136 13:00:16 -- target/invalid.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:37:57.394 13:00:16 -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:37:57.394 13:00:16 -- target/invalid.sh@67 -- # echo '' 00:37:57.394 13:00:16 -- target/invalid.sh@67 -- # head -n 1 00:37:57.394 13:00:16 -- target/invalid.sh@67 -- # IP= 00:37:57.394 13:00:16 -- target/invalid.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:37:57.652 [2024-07-22 13:00:16.907534] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:37:57.652 13:00:16 -- target/invalid.sh@69 -- # out='2024/07/22 13:00:16 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:37:57.652 request: 00:37:57.652 { 00:37:57.652 "method": "nvmf_subsystem_remove_listener", 00:37:57.652 "params": { 00:37:57.652 "nqn": "nqn.2016-06.io.spdk:cnode", 00:37:57.652 "listen_address": { 00:37:57.652 "trtype": "tcp", 00:37:57.652 "traddr": "", 00:37:57.652 "trsvcid": "4421" 00:37:57.652 } 00:37:57.652 } 00:37:57.652 } 00:37:57.652 Got JSON-RPC error response 00:37:57.652 GoRPCClient: error on JSON-RPC call' 00:37:57.652 13:00:16 -- target/invalid.sh@70 -- # [[ 2024/07/22 13:00:16 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:37:57.652 request: 00:37:57.652 { 00:37:57.652 "method": "nvmf_subsystem_remove_listener", 00:37:57.652 "params": { 00:37:57.652 "nqn": "nqn.2016-06.io.spdk:cnode", 00:37:57.652 "listen_address": { 00:37:57.652 "trtype": "tcp", 00:37:57.652 "traddr": "", 00:37:57.652 "trsvcid": "4421" 00:37:57.652 } 00:37:57.652 } 00:37:57.652 } 00:37:57.652 Got JSON-RPC error response 00:37:57.652 GoRPCClient: error on JSON-RPC call != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:37:57.652 13:00:16 -- target/invalid.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8647 -i 0 00:37:57.909 [2024-07-22 13:00:17.147740] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8647: invalid cntlid range [0-65519] 00:37:57.909 13:00:17 -- target/invalid.sh@73 -- # out='2024/07/22 13:00:17 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode8647], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 00:37:57.909 request: 00:37:57.909 { 00:37:57.909 "method": "nvmf_create_subsystem", 00:37:57.909 "params": { 00:37:57.909 "nqn": "nqn.2016-06.io.spdk:cnode8647", 00:37:57.909 "min_cntlid": 0 00:37:57.909 } 00:37:57.909 } 00:37:57.909 Got JSON-RPC error response 00:37:57.909 GoRPCClient: error on JSON-RPC call' 00:37:57.909 
13:00:17 -- target/invalid.sh@74 -- # [[ 2024/07/22 13:00:17 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode8647], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 00:37:57.909 request: 00:37:57.909 { 00:37:57.909 "method": "nvmf_create_subsystem", 00:37:57.909 "params": { 00:37:57.909 "nqn": "nqn.2016-06.io.spdk:cnode8647", 00:37:57.909 "min_cntlid": 0 00:37:57.909 } 00:37:57.909 } 00:37:57.909 Got JSON-RPC error response 00:37:57.909 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:37:57.909 13:00:17 -- target/invalid.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode25025 -i 65520 00:37:58.166 [2024-07-22 13:00:17.379974] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25025: invalid cntlid range [65520-65519] 00:37:58.166 13:00:17 -- target/invalid.sh@75 -- # out='2024/07/22 13:00:17 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode25025], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:37:58.166 request: 00:37:58.166 { 00:37:58.166 "method": "nvmf_create_subsystem", 00:37:58.166 "params": { 00:37:58.166 "nqn": "nqn.2016-06.io.spdk:cnode25025", 00:37:58.166 "min_cntlid": 65520 00:37:58.166 } 00:37:58.166 } 00:37:58.166 Got JSON-RPC error response 00:37:58.166 GoRPCClient: error on JSON-RPC call' 00:37:58.167 13:00:17 -- target/invalid.sh@76 -- # [[ 2024/07/22 13:00:17 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode25025], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:37:58.167 request: 00:37:58.167 { 00:37:58.167 "method": "nvmf_create_subsystem", 00:37:58.167 "params": { 00:37:58.167 "nqn": "nqn.2016-06.io.spdk:cnode25025", 00:37:58.167 "min_cntlid": 65520 00:37:58.167 } 00:37:58.167 } 00:37:58.167 Got JSON-RPC error response 00:37:58.167 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:37:58.167 13:00:17 -- target/invalid.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode26564 -I 0 00:37:58.424 [2024-07-22 13:00:17.656252] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26564: invalid cntlid range [1-0] 00:37:58.424 13:00:17 -- target/invalid.sh@77 -- # out='2024/07/22 13:00:17 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode26564], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 00:37:58.424 request: 00:37:58.424 { 00:37:58.424 "method": "nvmf_create_subsystem", 00:37:58.424 "params": { 00:37:58.424 "nqn": "nqn.2016-06.io.spdk:cnode26564", 00:37:58.424 "max_cntlid": 0 00:37:58.424 } 00:37:58.424 } 00:37:58.424 Got JSON-RPC error response 00:37:58.424 GoRPCClient: error on JSON-RPC call' 00:37:58.424 13:00:17 -- target/invalid.sh@78 -- # [[ 2024/07/22 13:00:17 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode26564], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 00:37:58.424 request: 00:37:58.424 { 00:37:58.424 "method": 
"nvmf_create_subsystem", 00:37:58.424 "params": { 00:37:58.424 "nqn": "nqn.2016-06.io.spdk:cnode26564", 00:37:58.424 "max_cntlid": 0 00:37:58.424 } 00:37:58.424 } 00:37:58.424 Got JSON-RPC error response 00:37:58.424 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:37:58.424 13:00:17 -- target/invalid.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode13264 -I 65520 00:37:58.748 [2024-07-22 13:00:17.888487] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13264: invalid cntlid range [1-65520] 00:37:58.748 13:00:17 -- target/invalid.sh@79 -- # out='2024/07/22 13:00:17 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode13264], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 00:37:58.748 request: 00:37:58.748 { 00:37:58.748 "method": "nvmf_create_subsystem", 00:37:58.748 "params": { 00:37:58.748 "nqn": "nqn.2016-06.io.spdk:cnode13264", 00:37:58.748 "max_cntlid": 65520 00:37:58.748 } 00:37:58.748 } 00:37:58.748 Got JSON-RPC error response 00:37:58.748 GoRPCClient: error on JSON-RPC call' 00:37:58.748 13:00:17 -- target/invalid.sh@80 -- # [[ 2024/07/22 13:00:17 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode13264], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 00:37:58.748 request: 00:37:58.748 { 00:37:58.748 "method": "nvmf_create_subsystem", 00:37:58.748 "params": { 00:37:58.748 "nqn": "nqn.2016-06.io.spdk:cnode13264", 00:37:58.748 "max_cntlid": 65520 00:37:58.748 } 00:37:58.748 } 00:37:58.748 Got JSON-RPC error response 00:37:58.748 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:37:58.748 13:00:17 -- target/invalid.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode24162 -i 6 -I 5 00:37:59.006 [2024-07-22 13:00:18.168760] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24162: invalid cntlid range [6-5] 00:37:59.006 13:00:18 -- target/invalid.sh@83 -- # out='2024/07/22 13:00:18 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode24162], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:37:59.006 request: 00:37:59.006 { 00:37:59.006 "method": "nvmf_create_subsystem", 00:37:59.006 "params": { 00:37:59.006 "nqn": "nqn.2016-06.io.spdk:cnode24162", 00:37:59.006 "min_cntlid": 6, 00:37:59.006 "max_cntlid": 5 00:37:59.006 } 00:37:59.006 } 00:37:59.006 Got JSON-RPC error response 00:37:59.006 GoRPCClient: error on JSON-RPC call' 00:37:59.006 13:00:18 -- target/invalid.sh@84 -- # [[ 2024/07/22 13:00:18 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode24162], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:37:59.006 request: 00:37:59.006 { 00:37:59.006 "method": "nvmf_create_subsystem", 00:37:59.006 "params": { 00:37:59.006 "nqn": "nqn.2016-06.io.spdk:cnode24162", 00:37:59.006 "min_cntlid": 6, 00:37:59.006 "max_cntlid": 5 00:37:59.006 } 00:37:59.006 } 00:37:59.006 Got JSON-RPC error response 00:37:59.006 GoRPCClient: error on JSON-RPC call == 
*\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:37:59.006 13:00:18 -- target/invalid.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:37:59.006 13:00:18 -- target/invalid.sh@87 -- # out='request: 00:37:59.006 { 00:37:59.006 "name": "foobar", 00:37:59.006 "method": "nvmf_delete_target", 00:37:59.006 "req_id": 1 00:37:59.006 } 00:37:59.006 Got JSON-RPC error response 00:37:59.006 response: 00:37:59.006 { 00:37:59.006 "code": -32602, 00:37:59.006 "message": "The specified target doesn'\''t exist, cannot delete it." 00:37:59.006 }' 00:37:59.006 13:00:18 -- target/invalid.sh@88 -- # [[ request: 00:37:59.006 { 00:37:59.006 "name": "foobar", 00:37:59.006 "method": "nvmf_delete_target", 00:37:59.006 "req_id": 1 00:37:59.006 } 00:37:59.006 Got JSON-RPC error response 00:37:59.006 response: 00:37:59.006 { 00:37:59.006 "code": -32602, 00:37:59.006 "message": "The specified target doesn't exist, cannot delete it." 00:37:59.006 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:37:59.006 13:00:18 -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:37:59.006 13:00:18 -- target/invalid.sh@91 -- # nvmftestfini 00:37:59.006 13:00:18 -- nvmf/common.sh@476 -- # nvmfcleanup 00:37:59.006 13:00:18 -- nvmf/common.sh@116 -- # sync 00:37:59.006 13:00:18 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:37:59.006 13:00:18 -- nvmf/common.sh@119 -- # set +e 00:37:59.006 13:00:18 -- nvmf/common.sh@120 -- # for i in {1..20} 00:37:59.006 13:00:18 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:37:59.006 rmmod nvme_tcp 00:37:59.006 rmmod nvme_fabrics 00:37:59.006 rmmod nvme_keyring 00:37:59.006 13:00:18 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:37:59.265 13:00:18 -- nvmf/common.sh@123 -- # set -e 00:37:59.265 13:00:18 -- nvmf/common.sh@124 -- # return 0 00:37:59.265 13:00:18 -- nvmf/common.sh@477 -- # '[' -n 77962 ']' 00:37:59.265 13:00:18 -- nvmf/common.sh@478 -- # killprocess 77962 00:37:59.265 13:00:18 -- common/autotest_common.sh@926 -- # '[' -z 77962 ']' 00:37:59.265 13:00:18 -- common/autotest_common.sh@930 -- # kill -0 77962 00:37:59.265 13:00:18 -- common/autotest_common.sh@931 -- # uname 00:37:59.265 13:00:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:37:59.265 13:00:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 77962 00:37:59.265 killing process with pid 77962 00:37:59.265 13:00:18 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:37:59.265 13:00:18 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:37:59.265 13:00:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 77962' 00:37:59.265 13:00:18 -- common/autotest_common.sh@945 -- # kill 77962 00:37:59.265 13:00:18 -- common/autotest_common.sh@950 -- # wait 77962 00:37:59.265 13:00:18 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:37:59.265 13:00:18 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:37:59.265 13:00:18 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:37:59.265 13:00:18 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:37:59.265 13:00:18 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:37:59.265 13:00:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:59.265 13:00:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:37:59.265 13:00:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:59.523 13:00:18 -- nvmf/common.sh@278 -- 
# ip -4 addr flush nvmf_init_if 00:37:59.523 ************************************ 00:37:59.523 END TEST nvmf_invalid 00:37:59.523 ************************************ 00:37:59.523 00:37:59.523 real 0m5.781s 00:37:59.523 user 0m23.347s 00:37:59.523 sys 0m1.183s 00:37:59.523 13:00:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:59.523 13:00:18 -- common/autotest_common.sh@10 -- # set +x 00:37:59.523 13:00:18 -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:37:59.523 13:00:18 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:37:59.523 13:00:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:37:59.523 13:00:18 -- common/autotest_common.sh@10 -- # set +x 00:37:59.523 ************************************ 00:37:59.523 START TEST nvmf_abort 00:37:59.523 ************************************ 00:37:59.523 13:00:18 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:37:59.523 * Looking for test storage... 00:37:59.523 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:37:59.523 13:00:18 -- target/abort.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:37:59.523 13:00:18 -- nvmf/common.sh@7 -- # uname -s 00:37:59.523 13:00:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:59.523 13:00:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:59.523 13:00:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:59.523 13:00:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:59.523 13:00:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:59.523 13:00:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:59.523 13:00:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:59.523 13:00:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:59.523 13:00:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:59.523 13:00:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:59.523 13:00:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:864ae4a3-cd96-439b-8ef2-6dab4d992115 00:37:59.523 13:00:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=864ae4a3-cd96-439b-8ef2-6dab4d992115 00:37:59.523 13:00:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:59.523 13:00:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:59.523 13:00:18 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:37:59.523 13:00:18 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:37:59.523 13:00:18 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:59.523 13:00:18 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:59.523 13:00:18 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:59.523 13:00:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:59.523 13:00:18 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:59.523 13:00:18 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:59.523 13:00:18 -- paths/export.sh@5 -- # export PATH 00:37:59.523 13:00:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:59.523 13:00:18 -- nvmf/common.sh@46 -- # : 0 00:37:59.523 13:00:18 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:37:59.523 13:00:18 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:37:59.523 13:00:18 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:37:59.523 13:00:18 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:59.523 13:00:18 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:59.523 13:00:18 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:37:59.523 13:00:18 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:37:59.523 13:00:18 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:37:59.523 13:00:18 -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:37:59.523 13:00:18 -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:37:59.523 13:00:18 -- target/abort.sh@14 -- # nvmftestinit 00:37:59.524 13:00:18 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:37:59.524 13:00:18 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:59.524 13:00:18 -- nvmf/common.sh@436 -- # prepare_net_devs 00:37:59.524 13:00:18 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:37:59.524 13:00:18 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:37:59.524 13:00:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:59.524 13:00:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:37:59.524 13:00:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:59.524 13:00:18 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:37:59.524 13:00:18 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:37:59.524 13:00:18 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:37:59.524 13:00:18 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:37:59.524 13:00:18 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:37:59.524 13:00:18 -- 
nvmf/common.sh@420 -- # nvmf_veth_init 00:37:59.524 13:00:18 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:59.524 13:00:18 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:59.524 13:00:18 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:37:59.524 13:00:18 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:37:59.524 13:00:18 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:37:59.524 13:00:18 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:37:59.524 13:00:18 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:37:59.524 13:00:18 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:59.524 13:00:18 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:37:59.524 13:00:18 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:37:59.524 13:00:18 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:37:59.524 13:00:18 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:37:59.524 13:00:18 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:37:59.524 13:00:18 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:37:59.524 Cannot find device "nvmf_tgt_br" 00:37:59.524 13:00:18 -- nvmf/common.sh@154 -- # true 00:37:59.524 13:00:18 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:37:59.524 Cannot find device "nvmf_tgt_br2" 00:37:59.524 13:00:18 -- nvmf/common.sh@155 -- # true 00:37:59.524 13:00:18 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:37:59.524 13:00:18 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:37:59.524 Cannot find device "nvmf_tgt_br" 00:37:59.524 13:00:18 -- nvmf/common.sh@157 -- # true 00:37:59.524 13:00:18 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:37:59.524 Cannot find device "nvmf_tgt_br2" 00:37:59.524 13:00:18 -- nvmf/common.sh@158 -- # true 00:37:59.524 13:00:18 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:37:59.781 13:00:18 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:37:59.781 13:00:18 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:37:59.781 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:37:59.781 13:00:18 -- nvmf/common.sh@161 -- # true 00:37:59.781 13:00:18 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:37:59.781 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:37:59.781 13:00:18 -- nvmf/common.sh@162 -- # true 00:37:59.781 13:00:18 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:37:59.781 13:00:19 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:37:59.781 13:00:19 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:37:59.781 13:00:19 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:37:59.781 13:00:19 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:37:59.781 13:00:19 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:37:59.781 13:00:19 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:37:59.781 13:00:19 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:37:59.781 13:00:19 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:37:59.781 13:00:19 -- nvmf/common.sh@182 
-- # ip link set nvmf_init_if up 00:37:59.781 13:00:19 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:37:59.781 13:00:19 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:37:59.781 13:00:19 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:37:59.781 13:00:19 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:37:59.781 13:00:19 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:37:59.781 13:00:19 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:37:59.781 13:00:19 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:37:59.781 13:00:19 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:37:59.781 13:00:19 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:37:59.781 13:00:19 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:37:59.781 13:00:19 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:37:59.781 13:00:19 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:37:59.781 13:00:19 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:37:59.781 13:00:19 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:37:59.781 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:59.781 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:37:59.781 00:37:59.781 --- 10.0.0.2 ping statistics --- 00:37:59.781 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:59.781 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:37:59.781 13:00:19 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:37:59.781 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:37:59.781 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.115 ms 00:37:59.781 00:37:59.781 --- 10.0.0.3 ping statistics --- 00:37:59.781 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:59.781 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:37:59.781 13:00:19 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:37:59.782 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:59.782 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:37:59.782 00:37:59.782 --- 10.0.0.1 ping statistics --- 00:37:59.782 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:59.782 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:37:59.782 13:00:19 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:59.782 13:00:19 -- nvmf/common.sh@421 -- # return 0 00:37:59.782 13:00:19 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:37:59.782 13:00:19 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:59.782 13:00:19 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:37:59.782 13:00:19 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:37:59.782 13:00:19 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:59.782 13:00:19 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:37:59.782 13:00:19 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:37:59.782 13:00:19 -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:37:59.782 13:00:19 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:37:59.782 13:00:19 -- common/autotest_common.sh@712 -- # xtrace_disable 00:37:59.782 13:00:19 -- common/autotest_common.sh@10 -- # set +x 00:37:59.782 13:00:19 -- nvmf/common.sh@469 -- # nvmfpid=78466 00:37:59.782 13:00:19 -- nvmf/common.sh@470 -- # waitforlisten 78466 00:37:59.782 13:00:19 -- common/autotest_common.sh@819 -- # '[' -z 78466 ']' 00:37:59.782 13:00:19 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:37:59.782 13:00:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:59.782 13:00:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:38:00.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:00.039 13:00:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:00.039 13:00:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:38:00.039 13:00:19 -- common/autotest_common.sh@10 -- # set +x 00:38:00.039 [2024-07-22 13:00:19.260780] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:38:00.039 [2024-07-22 13:00:19.260878] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:00.039 [2024-07-22 13:00:19.402923] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:38:00.297 [2024-07-22 13:00:19.511463] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:38:00.297 [2024-07-22 13:00:19.511648] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:00.297 [2024-07-22 13:00:19.511661] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:00.297 [2024-07-22 13:00:19.511685] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
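The nvmf_veth_init sequence traced above reduces to a short, reproducible script. A minimal sketch, assuming the same names nvmf/common.sh uses (namespace nvmf_tgt_ns_spdk, interfaces nvmf_init_if/nvmf_tgt_if with their _br peers, bridge nvmf_br, 10.0.0.1 on the initiator side and 10.0.0.2 inside the namespace); the second target interface (nvmf_tgt_if2 with 10.0.0.3) follows the same pattern and is omitted here:

# create the target network namespace and the veth pairs
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

# address the initiator end and the target end (inside the namespace)
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

# bring everything up
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# bridge the host-side peers together and allow NVMe/TCP traffic through
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# sanity check: the host can reach the target address in the namespace
ping -c 1 10.0.0.2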
00:38:00.297 [2024-07-22 13:00:19.511842] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:38:00.297 [2024-07-22 13:00:19.512301] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:38:00.297 [2024-07-22 13:00:19.512312] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:38:00.861 13:00:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:38:00.861 13:00:20 -- common/autotest_common.sh@852 -- # return 0 00:38:00.861 13:00:20 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:38:00.861 13:00:20 -- common/autotest_common.sh@718 -- # xtrace_disable 00:38:00.861 13:00:20 -- common/autotest_common.sh@10 -- # set +x 00:38:00.861 13:00:20 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:00.861 13:00:20 -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:38:00.861 13:00:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:38:00.861 13:00:20 -- common/autotest_common.sh@10 -- # set +x 00:38:00.861 [2024-07-22 13:00:20.229644] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:00.861 13:00:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:38:00.861 13:00:20 -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:38:00.861 13:00:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:38:00.861 13:00:20 -- common/autotest_common.sh@10 -- # set +x 00:38:00.861 Malloc0 00:38:00.861 13:00:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:38:00.861 13:00:20 -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:38:00.862 13:00:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:38:00.862 13:00:20 -- common/autotest_common.sh@10 -- # set +x 00:38:00.862 Delay0 00:38:00.862 13:00:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:38:00.862 13:00:20 -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:38:00.862 13:00:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:38:00.862 13:00:20 -- common/autotest_common.sh@10 -- # set +x 00:38:01.119 13:00:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:38:01.119 13:00:20 -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:38:01.119 13:00:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:38:01.119 13:00:20 -- common/autotest_common.sh@10 -- # set +x 00:38:01.119 13:00:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:38:01.119 13:00:20 -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:01.119 13:00:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:38:01.119 13:00:20 -- common/autotest_common.sh@10 -- # set +x 00:38:01.119 [2024-07-22 13:00:20.299536] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:01.119 13:00:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:38:01.119 13:00:20 -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:01.119 13:00:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:38:01.119 13:00:20 -- common/autotest_common.sh@10 -- # set +x 00:38:01.119 13:00:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:38:01.119 13:00:20 -- target/abort.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 
traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:38:01.119 [2024-07-22 13:00:20.479505] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:38:03.644 Initializing NVMe Controllers 00:38:03.644 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:38:03.644 controller IO queue size 128 less than required 00:38:03.644 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:38:03.644 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:38:03.644 Initialization complete. Launching workers. 00:38:03.644 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 32918 00:38:03.644 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 32983, failed to submit 62 00:38:03.644 success 32918, unsuccess 65, failed 0 00:38:03.644 13:00:22 -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:03.644 13:00:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:38:03.644 13:00:22 -- common/autotest_common.sh@10 -- # set +x 00:38:03.644 13:00:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:38:03.644 13:00:22 -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:38:03.644 13:00:22 -- target/abort.sh@38 -- # nvmftestfini 00:38:03.644 13:00:22 -- nvmf/common.sh@476 -- # nvmfcleanup 00:38:03.645 13:00:22 -- nvmf/common.sh@116 -- # sync 00:38:03.645 13:00:22 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:38:03.645 13:00:22 -- nvmf/common.sh@119 -- # set +e 00:38:03.645 13:00:22 -- nvmf/common.sh@120 -- # for i in {1..20} 00:38:03.645 13:00:22 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:38:03.645 rmmod nvme_tcp 00:38:03.645 rmmod nvme_fabrics 00:38:03.645 rmmod nvme_keyring 00:38:03.645 13:00:22 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:38:03.645 13:00:22 -- nvmf/common.sh@123 -- # set -e 00:38:03.645 13:00:22 -- nvmf/common.sh@124 -- # return 0 00:38:03.645 13:00:22 -- nvmf/common.sh@477 -- # '[' -n 78466 ']' 00:38:03.645 13:00:22 -- nvmf/common.sh@478 -- # killprocess 78466 00:38:03.645 13:00:22 -- common/autotest_common.sh@926 -- # '[' -z 78466 ']' 00:38:03.645 13:00:22 -- common/autotest_common.sh@930 -- # kill -0 78466 00:38:03.645 13:00:22 -- common/autotest_common.sh@931 -- # uname 00:38:03.645 13:00:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:38:03.645 13:00:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 78466 00:38:03.645 killing process with pid 78466 00:38:03.645 13:00:22 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:38:03.645 13:00:22 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:38:03.645 13:00:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 78466' 00:38:03.645 13:00:22 -- common/autotest_common.sh@945 -- # kill 78466 00:38:03.645 13:00:22 -- common/autotest_common.sh@950 -- # wait 78466 00:38:03.645 13:00:22 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:38:03.645 13:00:22 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:38:03.645 13:00:22 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:38:03.645 13:00:22 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:38:03.645 13:00:22 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:38:03.645 13:00:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:03.645 
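The abort run that just completed amounts to the RPC sequence from target/abort.sh followed by the example binary; a minimal sketch, assuming the target from the trace is still listening on 10.0.0.2:4420 and using scripts/rpc.py directly for what the test issues through rpc_cmd:

# transport, a delayed malloc bdev, and a subsystem exposing it
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
scripts/rpc.py bdev_malloc_create 64 4096 -b Malloc0
scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

# drive queued I/O from one core and abort it
build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128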
13:00:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:38:03.645 13:00:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:03.645 13:00:22 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:38:03.645 00:38:03.645 real 0m4.167s 00:38:03.645 user 0m12.038s 00:38:03.645 sys 0m0.989s 00:38:03.645 13:00:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:38:03.645 13:00:22 -- common/autotest_common.sh@10 -- # set +x 00:38:03.645 ************************************ 00:38:03.645 END TEST nvmf_abort 00:38:03.645 ************************************ 00:38:03.645 13:00:22 -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:38:03.645 13:00:22 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:38:03.645 13:00:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:38:03.645 13:00:22 -- common/autotest_common.sh@10 -- # set +x 00:38:03.645 ************************************ 00:38:03.645 START TEST nvmf_ns_hotplug_stress 00:38:03.645 ************************************ 00:38:03.645 13:00:22 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:38:03.645 * Looking for test storage... 00:38:03.645 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:38:03.645 13:00:23 -- target/ns_hotplug_stress.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:38:03.645 13:00:23 -- nvmf/common.sh@7 -- # uname -s 00:38:03.645 13:00:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:03.645 13:00:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:03.645 13:00:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:03.645 13:00:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:03.645 13:00:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:03.645 13:00:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:03.645 13:00:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:03.645 13:00:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:03.645 13:00:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:03.645 13:00:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:03.902 13:00:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:864ae4a3-cd96-439b-8ef2-6dab4d992115 00:38:03.902 13:00:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=864ae4a3-cd96-439b-8ef2-6dab4d992115 00:38:03.902 13:00:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:03.902 13:00:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:03.902 13:00:23 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:38:03.903 13:00:23 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:38:03.903 13:00:23 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:03.903 13:00:23 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:03.903 13:00:23 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:03.903 13:00:23 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:03.903 13:00:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:03.903 13:00:23 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:03.903 13:00:23 -- paths/export.sh@5 -- # export PATH 00:38:03.903 13:00:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:03.903 13:00:23 -- nvmf/common.sh@46 -- # : 0 00:38:03.903 13:00:23 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:38:03.903 13:00:23 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:38:03.903 13:00:23 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:38:03.903 13:00:23 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:03.903 13:00:23 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:03.903 13:00:23 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:38:03.903 13:00:23 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:38:03.903 13:00:23 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:38:03.903 13:00:23 -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:38:03.903 13:00:23 -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:38:03.903 13:00:23 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:38:03.903 13:00:23 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:03.903 13:00:23 -- nvmf/common.sh@436 -- # prepare_net_devs 00:38:03.903 13:00:23 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:38:03.903 13:00:23 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:38:03.903 13:00:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:38:03.903 13:00:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:38:03.903 13:00:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:03.903 13:00:23 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:38:03.903 13:00:23 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:38:03.903 13:00:23 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:38:03.903 13:00:23 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:38:03.903 13:00:23 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:38:03.903 13:00:23 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:38:03.903 13:00:23 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:03.903 13:00:23 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:03.903 13:00:23 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:38:03.903 13:00:23 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:38:03.903 13:00:23 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:38:03.903 13:00:23 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:38:03.903 13:00:23 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:38:03.903 13:00:23 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:03.903 13:00:23 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:38:03.903 13:00:23 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:38:03.903 13:00:23 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:38:03.903 13:00:23 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:38:03.903 13:00:23 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:38:03.903 13:00:23 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:38:03.903 Cannot find device "nvmf_tgt_br" 00:38:03.903 13:00:23 -- nvmf/common.sh@154 -- # true 00:38:03.903 13:00:23 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:38:03.903 Cannot find device "nvmf_tgt_br2" 00:38:03.903 13:00:23 -- nvmf/common.sh@155 -- # true 00:38:03.903 13:00:23 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:38:03.903 13:00:23 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:38:03.903 Cannot find device "nvmf_tgt_br" 00:38:03.903 13:00:23 -- nvmf/common.sh@157 -- # true 00:38:03.903 13:00:23 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:38:03.903 Cannot find device "nvmf_tgt_br2" 00:38:03.903 13:00:23 -- nvmf/common.sh@158 -- # true 00:38:03.903 13:00:23 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:38:03.903 13:00:23 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:38:03.903 13:00:23 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:38:03.903 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:38:03.903 13:00:23 -- nvmf/common.sh@161 -- # true 00:38:03.903 13:00:23 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:38:03.903 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:38:03.903 13:00:23 -- nvmf/common.sh@162 -- # true 00:38:03.903 13:00:23 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:38:03.903 13:00:23 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:38:03.903 13:00:23 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:38:03.903 13:00:23 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:38:03.903 13:00:23 -- 
nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:38:03.903 13:00:23 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:38:03.903 13:00:23 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:38:03.903 13:00:23 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:38:03.903 13:00:23 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:38:03.903 13:00:23 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:38:03.903 13:00:23 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:38:03.903 13:00:23 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:38:03.903 13:00:23 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:38:03.903 13:00:23 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:38:03.903 13:00:23 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:38:04.161 13:00:23 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:38:04.161 13:00:23 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:38:04.161 13:00:23 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:38:04.161 13:00:23 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:38:04.161 13:00:23 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:38:04.161 13:00:23 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:38:04.161 13:00:23 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:38:04.161 13:00:23 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:38:04.161 13:00:23 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:38:04.161 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:04.161 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.051 ms 00:38:04.161 00:38:04.161 --- 10.0.0.2 ping statistics --- 00:38:04.161 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:04.161 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:38:04.161 13:00:23 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:38:04.161 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:38:04.161 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.033 ms 00:38:04.161 00:38:04.161 --- 10.0.0.3 ping statistics --- 00:38:04.161 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:04.161 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:38:04.161 13:00:23 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:38:04.161 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:04.161 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:38:04.161 00:38:04.161 --- 10.0.0.1 ping statistics --- 00:38:04.161 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:04.161 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:38:04.161 13:00:23 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:04.161 13:00:23 -- nvmf/common.sh@421 -- # return 0 00:38:04.161 13:00:23 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:38:04.161 13:00:23 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:04.161 13:00:23 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:38:04.161 13:00:23 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:38:04.161 13:00:23 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:04.161 13:00:23 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:38:04.161 13:00:23 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:38:04.161 13:00:23 -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:38:04.161 13:00:23 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:38:04.161 13:00:23 -- common/autotest_common.sh@712 -- # xtrace_disable 00:38:04.161 13:00:23 -- common/autotest_common.sh@10 -- # set +x 00:38:04.161 13:00:23 -- nvmf/common.sh@469 -- # nvmfpid=78727 00:38:04.161 13:00:23 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:38:04.161 13:00:23 -- nvmf/common.sh@470 -- # waitforlisten 78727 00:38:04.161 13:00:23 -- common/autotest_common.sh@819 -- # '[' -z 78727 ']' 00:38:04.161 13:00:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:04.161 13:00:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:38:04.161 13:00:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:04.161 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:04.161 13:00:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:38:04.161 13:00:23 -- common/autotest_common.sh@10 -- # set +x 00:38:04.161 [2024-07-22 13:00:23.468929] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:38:04.161 [2024-07-22 13:00:23.469010] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:04.419 [2024-07-22 13:00:23.604608] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:38:04.419 [2024-07-22 13:00:23.693580] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:38:04.419 [2024-07-22 13:00:23.694051] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:04.419 [2024-07-22 13:00:23.694209] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:04.419 [2024-07-22 13:00:23.694341] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:38:04.420 [2024-07-22 13:00:23.694669] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:38:04.420 [2024-07-22 13:00:23.694788] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:38:04.420 [2024-07-22 13:00:23.694874] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:38:05.352 13:00:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:38:05.352 13:00:24 -- common/autotest_common.sh@852 -- # return 0 00:38:05.352 13:00:24 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:38:05.352 13:00:24 -- common/autotest_common.sh@718 -- # xtrace_disable 00:38:05.352 13:00:24 -- common/autotest_common.sh@10 -- # set +x 00:38:05.352 13:00:24 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:05.352 13:00:24 -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:38:05.352 13:00:24 -- target/ns_hotplug_stress.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:38:05.352 [2024-07-22 13:00:24.733756] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:05.352 13:00:24 -- target/ns_hotplug_stress.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:38:05.609 13:00:25 -- target/ns_hotplug_stress.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:05.866 [2024-07-22 13:00:25.236333] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:05.866 13:00:25 -- target/ns_hotplug_stress.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:06.124 13:00:25 -- target/ns_hotplug_stress.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:38:06.382 Malloc0 00:38:06.382 13:00:25 -- target/ns_hotplug_stress.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:38:06.639 Delay0 00:38:06.639 13:00:25 -- target/ns_hotplug_stress.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:06.897 13:00:26 -- target/ns_hotplug_stress.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:38:07.155 NULL1 00:38:07.155 13:00:26 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:38:07.413 13:00:26 -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=78858 00:38:07.413 13:00:26 -- target/ns_hotplug_stress.sh@40 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:38:07.413 13:00:26 -- target/ns_hotplug_stress.sh@44 -- # kill -0 78858 00:38:07.413 13:00:26 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:08.787 Read completed with error (sct=0, sc=11) 00:38:08.787 13:00:27 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:08.787 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:08.787 Message suppressed 999 times: Read completed with 
error (sct=0, sc=11) 00:38:08.787 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:08.787 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:08.787 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:08.787 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:08.787 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:08.787 13:00:28 -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:38:08.787 13:00:28 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:38:09.045 true 00:38:09.045 13:00:28 -- target/ns_hotplug_stress.sh@44 -- # kill -0 78858 00:38:09.045 13:00:28 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:09.979 13:00:29 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:10.237 13:00:29 -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:38:10.237 13:00:29 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:38:10.495 true 00:38:10.495 13:00:29 -- target/ns_hotplug_stress.sh@44 -- # kill -0 78858 00:38:10.495 13:00:29 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:10.753 13:00:29 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:11.012 13:00:30 -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:38:11.012 13:00:30 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:38:11.270 true 00:38:11.270 13:00:30 -- target/ns_hotplug_stress.sh@44 -- # kill -0 78858 00:38:11.270 13:00:30 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:11.549 13:00:30 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:11.868 13:00:31 -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:38:11.868 13:00:31 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:38:11.868 true 00:38:11.868 13:00:31 -- target/ns_hotplug_stress.sh@44 -- # kill -0 78858 00:38:11.868 13:00:31 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:12.803 13:00:32 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:13.061 13:00:32 -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:38:13.061 13:00:32 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:38:13.319 true 00:38:13.319 13:00:32 -- target/ns_hotplug_stress.sh@44 -- # kill -0 78858 00:38:13.320 13:00:32 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:13.602 13:00:32 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
00:38:13.862 13:00:33 -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:38:13.862 13:00:33 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:38:14.120 true 00:38:14.120 13:00:33 -- target/ns_hotplug_stress.sh@44 -- # kill -0 78858 00:38:14.120 13:00:33 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:14.378 13:00:33 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:14.636 13:00:33 -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:38:14.636 13:00:33 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:38:14.894 true 00:38:14.894 13:00:34 -- target/ns_hotplug_stress.sh@44 -- # kill -0 78858 00:38:14.894 13:00:34 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:15.829 13:00:35 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:16.088 13:00:35 -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:38:16.088 13:00:35 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:38:16.346 true 00:38:16.346 13:00:35 -- target/ns_hotplug_stress.sh@44 -- # kill -0 78858 00:38:16.346 13:00:35 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:16.605 13:00:35 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:16.863 13:00:36 -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:38:16.863 13:00:36 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:38:17.121 true 00:38:17.121 13:00:36 -- target/ns_hotplug_stress.sh@44 -- # kill -0 78858 00:38:17.121 13:00:36 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:18.056 13:00:37 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:18.056 13:00:37 -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:38:18.056 13:00:37 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:38:18.314 true 00:38:18.314 13:00:37 -- target/ns_hotplug_stress.sh@44 -- # kill -0 78858 00:38:18.314 13:00:37 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:18.880 13:00:38 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:18.880 13:00:38 -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:38:18.880 13:00:38 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:38:19.139 true 00:38:19.139 13:00:38 -- target/ns_hotplug_stress.sh@44 -- # kill -0 78858 00:38:19.139 13:00:38 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:38:19.397 13:00:38 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:19.656 13:00:38 -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:38:19.656 13:00:38 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:38:19.914 true 00:38:19.914 13:00:39 -- target/ns_hotplug_stress.sh@44 -- # kill -0 78858 00:38:19.914 13:00:39 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:20.849 13:00:40 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:21.107 13:00:40 -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:38:21.107 13:00:40 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:38:21.365 true 00:38:21.365 13:00:40 -- target/ns_hotplug_stress.sh@44 -- # kill -0 78858 00:38:21.365 13:00:40 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:21.623 13:00:41 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:21.881 13:00:41 -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:38:21.881 13:00:41 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:38:22.140 true 00:38:22.140 13:00:41 -- target/ns_hotplug_stress.sh@44 -- # kill -0 78858 00:38:22.140 13:00:41 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:22.398 13:00:41 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:22.656 13:00:41 -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:38:22.656 13:00:41 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:38:22.914 true 00:38:22.914 13:00:42 -- target/ns_hotplug_stress.sh@44 -- # kill -0 78858 00:38:22.914 13:00:42 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:23.848 13:00:43 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:24.106 13:00:43 -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:38:24.106 13:00:43 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:38:24.364 true 00:38:24.364 13:00:43 -- target/ns_hotplug_stress.sh@44 -- # kill -0 78858 00:38:24.364 13:00:43 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:24.622 13:00:43 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:24.880 13:00:44 -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:38:24.880 13:00:44 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:38:25.139 true 00:38:25.139 13:00:44 -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 78858 00:38:25.139 13:00:44 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:25.397 13:00:44 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:25.655 13:00:44 -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:38:25.655 13:00:44 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:38:25.913 true 00:38:25.913 13:00:45 -- target/ns_hotplug_stress.sh@44 -- # kill -0 78858 00:38:25.913 13:00:45 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:26.847 13:00:46 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:27.106 13:00:46 -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:38:27.106 13:00:46 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:38:27.364 true 00:38:27.364 13:00:46 -- target/ns_hotplug_stress.sh@44 -- # kill -0 78858 00:38:27.364 13:00:46 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:27.623 13:00:46 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:27.881 13:00:47 -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:38:27.881 13:00:47 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:38:28.139 true 00:38:28.139 13:00:47 -- target/ns_hotplug_stress.sh@44 -- # kill -0 78858 00:38:28.139 13:00:47 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:29.074 13:00:48 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:29.333 13:00:48 -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:38:29.333 13:00:48 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:38:29.591 true 00:38:29.591 13:00:48 -- target/ns_hotplug_stress.sh@44 -- # kill -0 78858 00:38:29.591 13:00:48 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:29.850 13:00:49 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:30.109 13:00:49 -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:38:30.109 13:00:49 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:38:30.368 true 00:38:30.368 13:00:49 -- target/ns_hotplug_stress.sh@44 -- # kill -0 78858 00:38:30.368 13:00:49 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:30.627 13:00:49 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:30.886 13:00:50 -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1023 00:38:30.886 13:00:50 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:38:31.144 true 00:38:31.144 13:00:50 -- target/ns_hotplug_stress.sh@44 -- # kill -0 78858 00:38:31.144 13:00:50 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:32.153 13:00:51 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:32.153 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:32.153 13:00:51 -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:38:32.153 13:00:51 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:38:32.411 true 00:38:32.411 13:00:51 -- target/ns_hotplug_stress.sh@44 -- # kill -0 78858 00:38:32.411 13:00:51 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:32.669 13:00:51 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:32.927 13:00:52 -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:38:32.927 13:00:52 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:38:33.186 true 00:38:33.186 13:00:52 -- target/ns_hotplug_stress.sh@44 -- # kill -0 78858 00:38:33.186 13:00:52 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:34.119 13:00:53 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:34.377 13:00:53 -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:38:34.377 13:00:53 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:38:34.378 true 00:38:34.378 13:00:53 -- target/ns_hotplug_stress.sh@44 -- # kill -0 78858 00:38:34.378 13:00:53 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:34.636 13:00:54 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:34.894 13:00:54 -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:38:34.894 13:00:54 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:38:35.152 true 00:38:35.152 13:00:54 -- target/ns_hotplug_stress.sh@44 -- # kill -0 78858 00:38:35.152 13:00:54 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:36.084 13:00:55 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:36.342 13:00:55 -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:38:36.342 13:00:55 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:38:36.342 true 00:38:36.342 13:00:55 -- target/ns_hotplug_stress.sh@44 -- # kill -0 78858 00:38:36.342 13:00:55 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:36.601 13:00:55 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:36.859 13:00:56 -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:38:36.859 13:00:56 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:38:37.117 true 00:38:37.117 13:00:56 -- target/ns_hotplug_stress.sh@44 -- # kill -0 78858 00:38:37.117 13:00:56 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:38.054 Initializing NVMe Controllers 00:38:38.054 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:38:38.054 Controller IO queue size 128, less than required. 00:38:38.054 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:38:38.054 Controller IO queue size 128, less than required. 00:38:38.054 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:38:38.054 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:38:38.054 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:38:38.054 Initialization complete. Launching workers. 00:38:38.054 ======================================================== 00:38:38.054 Latency(us) 00:38:38.054 Device Information : IOPS MiB/s Average min max 00:38:38.054 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 333.97 0.16 179522.34 2841.60 1129535.88 00:38:38.054 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 10274.07 5.02 12458.85 3167.14 643074.87 00:38:38.054 ======================================================== 00:38:38.054 Total : 10608.03 5.18 17718.42 2841.60 1129535.88 00:38:38.054 00:38:38.054 13:00:57 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:38.312 13:00:57 -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:38:38.312 13:00:57 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:38:38.571 true 00:38:38.571 13:00:57 -- target/ns_hotplug_stress.sh@44 -- # kill -0 78858 00:38:38.571 /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (78858) - No such process 00:38:38.571 13:00:57 -- target/ns_hotplug_stress.sh@53 -- # wait 78858 00:38:38.571 13:00:57 -- target/ns_hotplug_stress.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:38.830 13:00:57 -- target/ns_hotplug_stress.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:38.830 13:00:58 -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:38:38.830 13:00:58 -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:38:38.830 13:00:58 -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:38:38.830 13:00:58 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:38:38.831 13:00:58 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:38:39.090 null0 00:38:39.090 13:00:58 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:38:39.090 13:00:58 -- target/ns_hotplug_stress.sh@59 -- # 
(( i < nthreads )) 00:38:39.090 13:00:58 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:38:39.348 null1 00:38:39.607 13:00:58 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:38:39.607 13:00:58 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:38:39.607 13:00:58 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:38:39.607 null2 00:38:39.607 13:00:59 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:38:39.607 13:00:59 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:38:39.607 13:00:59 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:38:39.866 null3 00:38:39.866 13:00:59 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:38:39.866 13:00:59 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:38:39.866 13:00:59 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:38:40.125 null4 00:38:40.125 13:00:59 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:38:40.125 13:00:59 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:38:40.125 13:00:59 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:38:40.384 null5 00:38:40.384 13:00:59 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:38:40.384 13:00:59 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:38:40.384 13:00:59 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:38:40.643 null6 00:38:40.643 13:00:59 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:38:40.643 13:00:59 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:38:40.643 13:00:59 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:38:40.902 null7 00:38:40.902 13:01:00 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:38:40.902 13:01:00 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:38:40.902 13:01:00 -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:38:40.902 13:01:00 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:38:40.902 13:01:00 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:38:40.902 13:01:00 -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:38:40.902 13:01:00 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:38:40.902 13:01:00 -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:38:40.902 13:01:00 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:38:40.902 13:01:00 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:38:40.902 13:01:00 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:40.902 13:01:00 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:38:40.902 13:01:00 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:38:40.902 13:01:00 -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:38:40.902 13:01:00 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:38:40.902 13:01:00 -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:38:40.902 13:01:00 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:38:40.902 13:01:00 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:40.902 13:01:00 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:38:40.902 13:01:00 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:38:40.902 13:01:00 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:38:40.902 13:01:00 -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:38:40.902 13:01:00 -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:38:40.902 13:01:00 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:38:40.902 13:01:00 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:40.902 13:01:00 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:38:40.902 13:01:00 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:38:40.902 13:01:00 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:38:40.902 13:01:00 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:38:40.902 13:01:00 -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:38:40.902 13:01:00 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:38:40.902 13:01:00 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:38:40.902 13:01:00 -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:38:40.902 13:01:00 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:38:40.902 13:01:00 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:40.902 13:01:00 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:38:40.902 13:01:00 -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:38:40.902 13:01:00 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:38:40.902 13:01:00 -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:38:40.902 13:01:00 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:38:40.902 13:01:00 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:40.902 13:01:00 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:38:40.902 13:01:00 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:38:40.902 13:01:00 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:38:40.902 13:01:00 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:38:40.902 13:01:00 -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:38:40.902 13:01:00 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:38:40.902 13:01:00 -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:38:40.902 13:01:00 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:38:40.902 13:01:00 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:38:40.902 13:01:00 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:40.902 13:01:00 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:38:40.902 13:01:00 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:38:40.902 13:01:00 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:38:40.902 13:01:00 -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:38:40.902 13:01:00 -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:38:40.902 13:01:00 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:38:40.902 13:01:00 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:40.902 13:01:00 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:38:40.902 13:01:00 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:38:40.902 13:01:00 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:38:40.902 13:01:00 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:38:40.902 13:01:00 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:38:40.902 13:01:00 -- target/ns_hotplug_stress.sh@66 -- # wait 79912 79913 79916 79918 79919 79922 79923 79926 00:38:40.902 13:01:00 -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:38:40.902 13:01:00 -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:38:40.902 13:01:00 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:38:40.902 13:01:00 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:40.902 13:01:00 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:38:41.161 13:01:00 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:41.161 13:01:00 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:38:41.161 13:01:00 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:38:41.161 13:01:00 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:38:41.161 13:01:00 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:38:41.161 13:01:00 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:38:41.161 13:01:00 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:41.161 13:01:00 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:38:41.419 13:01:00 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:41.419 13:01:00 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:41.419 13:01:00 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:38:41.419 13:01:00 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:41.419 13:01:00 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:41.419 13:01:00 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:38:41.419 13:01:00 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:41.420 13:01:00 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:41.420 13:01:00 -- target/ns_hotplug_stress.sh@17 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:38:41.420 13:01:00 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:41.420 13:01:00 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:41.420 13:01:00 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:38:41.420 13:01:00 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:41.420 13:01:00 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:41.420 13:01:00 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:38:41.420 13:01:00 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:41.420 13:01:00 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:41.420 13:01:00 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:38:41.420 13:01:00 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:41.420 13:01:00 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:41.420 13:01:00 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:38:41.678 13:01:00 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:41.678 13:01:00 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:41.678 13:01:00 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:38:41.678 13:01:00 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:38:41.678 13:01:00 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:41.678 13:01:00 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:38:41.678 13:01:00 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:38:41.678 13:01:01 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:38:41.678 13:01:01 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:38:41.937 13:01:01 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:41.937 13:01:01 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:41.937 13:01:01 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:41.937 13:01:01 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:38:41.937 13:01:01 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:41.937 13:01:01 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:41.937 13:01:01 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:38:41.937 13:01:01 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 
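The interleaved add/remove traffic in the trace comes from per-bdev add_remove workers (ns_hotplug_stress.sh @14-@18) launched by the driver loop (@62-@66). This is a condensed reconstruction from the xtrace output, not the verbatim script; variable names such as pids are assumptions, while the RPC invocations are exactly as traced:

  # worker: repeatedly attach and detach one namespace id on the shared subsystem
  add_remove() {
      local nsid=$1 bdev=$2
      local i
      for (( i = 0; i < 10; i++ )); do
          "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
          "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
      done
  }

  # driver: one background worker per null bdev, all racing on cnode1, then wait for them
  pids=()
  for (( i = 0; i < nthreads; i++ )); do
      add_remove "$((i + 1))" "null$i" &
      pids+=("$!")
  done
  wait "${pids[@]}"    # the "wait 79912 79913 ..." line in the trace

Because all eight workers target the same subsystem, the adds and removes land in arbitrary order, which is exactly the namespace hotplug race the test is exercising.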
00:38:41.937 13:01:01 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:41.937 13:01:01 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:41.937 13:01:01 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:38:41.937 13:01:01 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:41.937 13:01:01 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:41.937 13:01:01 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:38:41.937 13:01:01 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:41.937 13:01:01 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:41.937 13:01:01 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:38:41.937 13:01:01 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:41.937 13:01:01 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:41.937 13:01:01 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:38:42.195 13:01:01 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:38:42.195 13:01:01 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:42.195 13:01:01 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:38:42.195 13:01:01 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:42.196 13:01:01 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:42.196 13:01:01 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:38:42.196 13:01:01 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:38:42.196 13:01:01 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:42.196 13:01:01 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:42.196 13:01:01 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:38:42.196 13:01:01 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:38:42.196 13:01:01 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:38:42.455 13:01:01 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:42.455 13:01:01 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:42.455 13:01:01 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:38:42.455 13:01:01 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:42.455 13:01:01 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:42.455 13:01:01 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:38:42.455 13:01:01 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:38:42.455 13:01:01 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:42.455 13:01:01 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:42.455 13:01:01 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:38:42.455 13:01:01 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:42.455 13:01:01 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:42.455 13:01:01 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:38:42.455 13:01:01 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:42.455 13:01:01 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:42.455 13:01:01 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:38:42.455 13:01:01 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:42.455 13:01:01 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:42.455 13:01:01 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:42.455 13:01:01 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:38:42.714 13:01:01 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:38:42.714 13:01:01 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:38:42.714 13:01:01 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:42.714 13:01:01 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:42.714 13:01:01 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:38:42.714 13:01:01 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:38:42.714 13:01:02 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:38:42.714 13:01:02 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:42.714 13:01:02 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:42.714 13:01:02 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:38:42.714 13:01:02 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:38:42.714 13:01:02 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:42.714 13:01:02 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:42.714 13:01:02 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:38:42.714 13:01:02 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:42.714 13:01:02 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:42.714 13:01:02 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:38:42.974 13:01:02 -- target/ns_hotplug_stress.sh@18 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:42.974 13:01:02 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:38:42.974 13:01:02 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:42.974 13:01:02 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:42.974 13:01:02 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:38:42.974 13:01:02 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:42.974 13:01:02 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:42.974 13:01:02 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:38:42.974 13:01:02 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:38:42.974 13:01:02 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:42.974 13:01:02 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:42.974 13:01:02 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:38:42.974 13:01:02 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:42.974 13:01:02 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:38:43.232 13:01:02 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:43.232 13:01:02 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:43.232 13:01:02 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:38:43.232 13:01:02 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:43.232 13:01:02 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:43.232 13:01:02 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:38:43.232 13:01:02 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:38:43.233 13:01:02 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:38:43.233 13:01:02 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:43.233 13:01:02 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:43.233 13:01:02 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:38:43.233 13:01:02 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:43.233 13:01:02 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:43.233 13:01:02 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:38:43.233 13:01:02 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:38:43.233 13:01:02 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:43.233 13:01:02 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:43.233 13:01:02 -- 
target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:38:43.491 13:01:02 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:43.491 13:01:02 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:43.491 13:01:02 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:38:43.491 13:01:02 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:43.491 13:01:02 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:38:43.491 13:01:02 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:43.491 13:01:02 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:43.491 13:01:02 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:38:43.491 13:01:02 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:38:43.491 13:01:02 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:43.491 13:01:02 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:43.491 13:01:02 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:43.491 13:01:02 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:38:43.492 13:01:02 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:38:43.751 13:01:02 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:38:43.751 13:01:02 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:43.751 13:01:02 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:43.751 13:01:02 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:38:43.751 13:01:03 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:38:43.751 13:01:03 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:43.751 13:01:03 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:43.751 13:01:03 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:38:43.751 13:01:03 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:43.751 13:01:03 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:43.751 13:01:03 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:38:43.751 13:01:03 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:43.751 13:01:03 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:43.751 13:01:03 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:38:43.751 13:01:03 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:43.751 13:01:03 -- target/ns_hotplug_stress.sh@16 -- # 
(( i < 10 )) 00:38:43.751 13:01:03 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:38:43.751 13:01:03 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:38:44.010 13:01:03 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:44.010 13:01:03 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:44.010 13:01:03 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:38:44.010 13:01:03 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:44.010 13:01:03 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:44.010 13:01:03 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:38:44.010 13:01:03 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:44.010 13:01:03 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:38:44.010 13:01:03 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:38:44.010 13:01:03 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:44.010 13:01:03 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:38:44.270 13:01:03 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:38:44.270 13:01:03 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:38:44.270 13:01:03 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:44.270 13:01:03 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:44.270 13:01:03 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:38:44.270 13:01:03 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:44.270 13:01:03 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:44.270 13:01:03 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:38:44.270 13:01:03 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:44.270 13:01:03 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:44.270 13:01:03 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:38:44.270 13:01:03 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:44.270 13:01:03 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:44.270 13:01:03 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:38:44.270 13:01:03 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:44.270 13:01:03 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:44.270 13:01:03 -- target/ns_hotplug_stress.sh@17 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:38:44.270 13:01:03 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:44.270 13:01:03 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:44.270 13:01:03 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:38:44.270 13:01:03 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:44.270 13:01:03 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:44.270 13:01:03 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:38:44.556 13:01:03 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:38:44.556 13:01:03 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:44.556 13:01:03 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:44.556 13:01:03 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:38:44.556 13:01:03 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:44.556 13:01:03 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:38:44.556 13:01:03 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:38:44.556 13:01:03 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:44.556 13:01:03 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:38:44.556 13:01:03 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:44.556 13:01:03 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:44.556 13:01:03 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:38:44.815 13:01:03 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:38:44.815 13:01:04 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:44.815 13:01:04 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:44.815 13:01:04 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:38:44.815 13:01:04 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:38:44.815 13:01:04 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:44.815 13:01:04 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:44.815 13:01:04 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:38:44.815 13:01:04 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:44.815 13:01:04 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:44.815 13:01:04 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 
00:38:44.815 13:01:04 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:44.815 13:01:04 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:44.815 13:01:04 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:38:44.815 13:01:04 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:44.815 13:01:04 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:44.815 13:01:04 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:38:44.815 13:01:04 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:44.815 13:01:04 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:44.815 13:01:04 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:38:45.075 13:01:04 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:45.075 13:01:04 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:38:45.075 13:01:04 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:38:45.075 13:01:04 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:45.075 13:01:04 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:45.075 13:01:04 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:38:45.075 13:01:04 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:38:45.075 13:01:04 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:45.075 13:01:04 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:38:45.075 13:01:04 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:38:45.075 13:01:04 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:45.075 13:01:04 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:45.075 13:01:04 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:38:45.334 13:01:04 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:45.334 13:01:04 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:45.334 13:01:04 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:38:45.334 13:01:04 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:45.334 13:01:04 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:45.334 13:01:04 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:38:45.335 13:01:04 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:38:45.335 13:01:04 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:45.335 13:01:04 -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:45.335 13:01:04 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:38:45.335 13:01:04 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:38:45.335 13:01:04 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:45.335 13:01:04 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:45.335 13:01:04 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:38:45.335 13:01:04 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:45.335 13:01:04 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:45.335 13:01:04 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:38:45.594 13:01:04 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:38:45.594 13:01:04 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:45.594 13:01:04 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:45.594 13:01:04 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:38:45.594 13:01:04 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:45.594 13:01:04 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:45.594 13:01:04 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:38:45.594 13:01:04 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:45.594 13:01:04 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:38:45.594 13:01:04 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:45.594 13:01:04 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:45.594 13:01:04 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:45.594 13:01:04 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:38:45.594 13:01:04 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:38:45.854 13:01:05 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:45.854 13:01:05 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:45.854 13:01:05 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:38:45.854 13:01:05 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:45.854 13:01:05 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:45.854 13:01:05 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:38:45.854 13:01:05 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:45.854 13:01:05 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:45.854 13:01:05 -- target/ns_hotplug_stress.sh@17 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:38:45.854 13:01:05 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:45.854 13:01:05 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:45.854 13:01:05 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:45.854 13:01:05 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:45.854 13:01:05 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:38:45.854 13:01:05 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:38:46.113 13:01:05 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:46.113 13:01:05 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:46.113 13:01:05 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:46.113 13:01:05 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:46.113 13:01:05 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:38:46.113 13:01:05 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:46.113 13:01:05 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:46.113 13:01:05 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:46.113 13:01:05 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:46.372 13:01:05 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:38:46.372 13:01:05 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:46.372 13:01:05 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:46.372 13:01:05 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:46.372 13:01:05 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:46.372 13:01:05 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:46.372 13:01:05 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:46.372 13:01:05 -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:38:46.372 13:01:05 -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:38:46.372 13:01:05 -- nvmf/common.sh@476 -- # nvmfcleanup 00:38:46.372 13:01:05 -- nvmf/common.sh@116 -- # sync 00:38:46.631 13:01:05 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:38:46.631 13:01:05 -- nvmf/common.sh@119 -- # set +e 00:38:46.631 13:01:05 -- nvmf/common.sh@120 -- # for i in {1..20} 00:38:46.631 13:01:05 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:38:46.631 rmmod nvme_tcp 00:38:46.631 rmmod nvme_fabrics 00:38:46.631 rmmod nvme_keyring 00:38:46.631 13:01:05 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:38:46.631 13:01:05 -- nvmf/common.sh@123 -- # set -e 00:38:46.631 13:01:05 -- nvmf/common.sh@124 -- # return 0 00:38:46.631 13:01:05 -- nvmf/common.sh@477 -- # '[' -n 78727 ']' 00:38:46.631 13:01:05 -- nvmf/common.sh@478 -- # killprocess 78727 00:38:46.631 13:01:05 -- common/autotest_common.sh@926 -- # '[' -z 78727 ']' 00:38:46.631 13:01:05 -- common/autotest_common.sh@930 -- # kill -0 78727 00:38:46.631 13:01:05 -- common/autotest_common.sh@931 -- # uname 00:38:46.631 13:01:05 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:38:46.631 13:01:05 -- common/autotest_common.sh@932 -- # ps 
--no-headers -o comm= 78727 00:38:46.631 13:01:05 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:38:46.631 13:01:05 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:38:46.631 killing process with pid 78727 00:38:46.631 13:01:05 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 78727' 00:38:46.631 13:01:05 -- common/autotest_common.sh@945 -- # kill 78727 00:38:46.631 13:01:05 -- common/autotest_common.sh@950 -- # wait 78727 00:38:46.891 13:01:06 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:38:46.891 13:01:06 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:38:46.891 13:01:06 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:38:46.891 13:01:06 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:38:46.891 13:01:06 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:38:46.891 13:01:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:46.891 13:01:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:38:46.891 13:01:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:46.891 13:01:06 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:38:46.891 00:38:46.891 real 0m43.132s 00:38:46.891 user 3m26.040s 00:38:46.891 sys 0m12.480s 00:38:46.891 13:01:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:38:46.891 13:01:06 -- common/autotest_common.sh@10 -- # set +x 00:38:46.891 ************************************ 00:38:46.891 END TEST nvmf_ns_hotplug_stress 00:38:46.891 ************************************ 00:38:46.891 13:01:06 -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:38:46.891 13:01:06 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:38:46.891 13:01:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:38:46.891 13:01:06 -- common/autotest_common.sh@10 -- # set +x 00:38:46.891 ************************************ 00:38:46.891 START TEST nvmf_connect_stress 00:38:46.891 ************************************ 00:38:46.891 13:01:06 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:38:46.891 * Looking for test storage... 
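Before connect_stress.sh gets underway, the ns_hotplug_stress run tears itself down through nvmftestfini and killprocess, as traced just above. A condensed sketch of that teardown follows; the module-unload retry loop and the sudo handling of the real helpers are omitted, and nvmfpid stands for the target pid (78727 in this run):

  nvmftestfini() {
      sync
      modprobe -v -r nvme-tcp        # the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines above
      modprobe -v -r nvme-fabrics
      # killprocess: the real helper first confirms what the pid is running (reactor_1 here)
      # and whether it was started via sudo; only the plain-kill path is kept in this sketch
      echo "killing process with pid $nvmfpid"
      kill "$nvmfpid" && wait "$nvmfpid"
      ip -4 addr flush nvmf_init_if  # flush the initiator-side veth; the netns removal itself is silenced in the trace
  }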
00:38:46.891 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:38:46.891 13:01:06 -- target/connect_stress.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:38:46.891 13:01:06 -- nvmf/common.sh@7 -- # uname -s 00:38:46.891 13:01:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:46.891 13:01:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:46.891 13:01:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:46.891 13:01:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:46.891 13:01:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:46.891 13:01:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:46.891 13:01:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:46.891 13:01:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:46.891 13:01:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:46.891 13:01:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:46.891 13:01:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:864ae4a3-cd96-439b-8ef2-6dab4d992115 00:38:46.891 13:01:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=864ae4a3-cd96-439b-8ef2-6dab4d992115 00:38:46.891 13:01:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:46.891 13:01:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:46.891 13:01:06 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:38:46.891 13:01:06 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:38:46.891 13:01:06 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:46.891 13:01:06 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:46.891 13:01:06 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:46.891 13:01:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:46.891 13:01:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:46.891 13:01:06 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:46.891 13:01:06 -- 
paths/export.sh@5 -- # export PATH 00:38:46.891 13:01:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:46.891 13:01:06 -- nvmf/common.sh@46 -- # : 0 00:38:46.891 13:01:06 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:38:46.891 13:01:06 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:38:46.891 13:01:06 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:38:46.891 13:01:06 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:46.891 13:01:06 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:46.891 13:01:06 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:38:46.891 13:01:06 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:38:46.891 13:01:06 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:38:46.891 13:01:06 -- target/connect_stress.sh@12 -- # nvmftestinit 00:38:46.891 13:01:06 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:38:46.892 13:01:06 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:46.892 13:01:06 -- nvmf/common.sh@436 -- # prepare_net_devs 00:38:46.892 13:01:06 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:38:46.892 13:01:06 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:38:46.892 13:01:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:46.892 13:01:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:38:46.892 13:01:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:46.892 13:01:06 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:38:46.892 13:01:06 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:38:46.892 13:01:06 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:38:46.892 13:01:06 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:38:46.892 13:01:06 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:38:46.892 13:01:06 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:38:46.892 13:01:06 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:46.892 13:01:06 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:46.892 13:01:06 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:38:46.892 13:01:06 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:38:46.892 13:01:06 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:38:46.892 13:01:06 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:38:46.892 13:01:06 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:38:46.892 13:01:06 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:46.892 13:01:06 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:38:46.892 13:01:06 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:38:46.892 13:01:06 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:38:46.892 13:01:06 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:38:46.892 13:01:06 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:38:46.892 13:01:06 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:38:46.892 Cannot find device "nvmf_tgt_br" 00:38:46.892 
13:01:06 -- nvmf/common.sh@154 -- # true 00:38:46.892 13:01:06 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:38:46.892 Cannot find device "nvmf_tgt_br2" 00:38:46.892 13:01:06 -- nvmf/common.sh@155 -- # true 00:38:46.892 13:01:06 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:38:46.892 13:01:06 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:38:46.892 Cannot find device "nvmf_tgt_br" 00:38:46.892 13:01:06 -- nvmf/common.sh@157 -- # true 00:38:46.892 13:01:06 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:38:47.152 Cannot find device "nvmf_tgt_br2" 00:38:47.152 13:01:06 -- nvmf/common.sh@158 -- # true 00:38:47.152 13:01:06 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:38:47.152 13:01:06 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:38:47.152 13:01:06 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:38:47.152 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:38:47.152 13:01:06 -- nvmf/common.sh@161 -- # true 00:38:47.152 13:01:06 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:38:47.152 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:38:47.152 13:01:06 -- nvmf/common.sh@162 -- # true 00:38:47.152 13:01:06 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:38:47.152 13:01:06 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:38:47.152 13:01:06 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:38:47.152 13:01:06 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:38:47.152 13:01:06 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:38:47.152 13:01:06 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:38:47.152 13:01:06 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:38:47.152 13:01:06 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:38:47.152 13:01:06 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:38:47.152 13:01:06 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:38:47.152 13:01:06 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:38:47.152 13:01:06 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:38:47.152 13:01:06 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:38:47.152 13:01:06 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:38:47.152 13:01:06 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:38:47.152 13:01:06 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:38:47.152 13:01:06 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:38:47.152 13:01:06 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:38:47.152 13:01:06 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:38:47.152 13:01:06 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:38:47.152 13:01:06 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:38:47.152 13:01:06 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:38:47.152 13:01:06 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:38:47.152 13:01:06 -- nvmf/common.sh@204 -- # ping 
-c 1 10.0.0.2 00:38:47.152 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:47.152 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.090 ms 00:38:47.152 00:38:47.152 --- 10.0.0.2 ping statistics --- 00:38:47.152 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:47.152 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:38:47.152 13:01:06 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:38:47.152 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:38:47.152 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.033 ms 00:38:47.152 00:38:47.152 --- 10.0.0.3 ping statistics --- 00:38:47.152 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:47.152 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:38:47.152 13:01:06 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:38:47.152 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:47.152 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.018 ms 00:38:47.152 00:38:47.152 --- 10.0.0.1 ping statistics --- 00:38:47.152 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:47.152 rtt min/avg/max/mdev = 0.018/0.018/0.018/0.000 ms 00:38:47.152 13:01:06 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:47.152 13:01:06 -- nvmf/common.sh@421 -- # return 0 00:38:47.152 13:01:06 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:38:47.152 13:01:06 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:47.152 13:01:06 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:38:47.152 13:01:06 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:38:47.152 13:01:06 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:47.152 13:01:06 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:38:47.152 13:01:06 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:38:47.152 13:01:06 -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:38:47.152 13:01:06 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:38:47.152 13:01:06 -- common/autotest_common.sh@712 -- # xtrace_disable 00:38:47.152 13:01:06 -- common/autotest_common.sh@10 -- # set +x 00:38:47.152 13:01:06 -- nvmf/common.sh@469 -- # nvmfpid=81217 00:38:47.152 13:01:06 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:38:47.152 13:01:06 -- nvmf/common.sh@470 -- # waitforlisten 81217 00:38:47.152 13:01:06 -- common/autotest_common.sh@819 -- # '[' -z 81217 ']' 00:38:47.152 13:01:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:47.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:47.152 13:01:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:38:47.152 13:01:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:47.152 13:01:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:38:47.152 13:01:06 -- common/autotest_common.sh@10 -- # set +x 00:38:47.412 [2024-07-22 13:01:06.601477] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
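The nvmf_veth_init sequence traced above (nvmf/common.sh @140-@206) gives the target its own network namespace and a bridged veth path for NVMe/TCP, with the initiator at 10.0.0.1 and the target listening on 10.0.0.2/10.0.0.3. Condensed to the commands that appear in the trace (the "Cannot find device"/"Cannot open network namespace" lines are the best-effort cleanup of a previous run failing harmlessly):

  ip netns add nvmf_tgt_ns_spdk                                  # target namespace
  ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator-side veth pair
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br       # target-side veth pairs
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up

  ip link add nvmf_br type bridge && ip link set nvmf_br up      # bridge the three *_br peers together
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br

  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP on the default port
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                       # the sanity pings shown above
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1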
00:38:47.412 [2024-07-22 13:01:06.601593] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:47.412 [2024-07-22 13:01:06.734581] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:38:47.412 [2024-07-22 13:01:06.811321] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:38:47.412 [2024-07-22 13:01:06.811527] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:47.412 [2024-07-22 13:01:06.811555] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:47.412 [2024-07-22 13:01:06.811564] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:47.412 [2024-07-22 13:01:06.811657] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:38:47.412 [2024-07-22 13:01:06.811906] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:38:47.412 [2024-07-22 13:01:06.811920] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:38:48.350 13:01:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:38:48.350 13:01:07 -- common/autotest_common.sh@852 -- # return 0 00:38:48.350 13:01:07 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:38:48.350 13:01:07 -- common/autotest_common.sh@718 -- # xtrace_disable 00:38:48.350 13:01:07 -- common/autotest_common.sh@10 -- # set +x 00:38:48.350 13:01:07 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:48.350 13:01:07 -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:48.350 13:01:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:38:48.350 13:01:07 -- common/autotest_common.sh@10 -- # set +x 00:38:48.350 [2024-07-22 13:01:07.583854] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:48.350 13:01:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:38:48.350 13:01:07 -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:38:48.350 13:01:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:38:48.350 13:01:07 -- common/autotest_common.sh@10 -- # set +x 00:38:48.350 13:01:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:38:48.350 13:01:07 -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:48.350 13:01:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:38:48.350 13:01:07 -- common/autotest_common.sh@10 -- # set +x 00:38:48.350 [2024-07-22 13:01:07.601626] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:48.350 13:01:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:38:48.350 13:01:07 -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:38:48.350 13:01:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:38:48.350 13:01:07 -- common/autotest_common.sh@10 -- # set +x 00:38:48.350 NULL1 00:38:48.350 13:01:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:38:48.350 13:01:07 -- target/connect_stress.sh@21 -- # PERF_PID=81269 00:38:48.350 13:01:07 -- target/connect_stress.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/nvme/connect_stress/connect_stress -c 0x1 
-r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:38:48.350 13:01:07 -- target/connect_stress.sh@23 -- # rpcs=/home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:38:48.350 13:01:07 -- target/connect_stress.sh@25 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:38:48.350 13:01:07 -- target/connect_stress.sh@27 -- # seq 1 20 00:38:48.350 13:01:07 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:38:48.350 13:01:07 -- target/connect_stress.sh@28 -- # cat 00:38:48.350 13:01:07 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:38:48.350 13:01:07 -- target/connect_stress.sh@28 -- # cat 00:38:48.350 13:01:07 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:38:48.350 13:01:07 -- target/connect_stress.sh@28 -- # cat 00:38:48.350 13:01:07 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:38:48.350 13:01:07 -- target/connect_stress.sh@28 -- # cat 00:38:48.350 13:01:07 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:38:48.350 13:01:07 -- target/connect_stress.sh@28 -- # cat 00:38:48.350 13:01:07 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:38:48.350 13:01:07 -- target/connect_stress.sh@28 -- # cat 00:38:48.350 13:01:07 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:38:48.350 13:01:07 -- target/connect_stress.sh@28 -- # cat 00:38:48.350 13:01:07 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:38:48.350 13:01:07 -- target/connect_stress.sh@28 -- # cat 00:38:48.350 13:01:07 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:38:48.350 13:01:07 -- target/connect_stress.sh@28 -- # cat 00:38:48.350 13:01:07 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:38:48.350 13:01:07 -- target/connect_stress.sh@28 -- # cat 00:38:48.350 13:01:07 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:38:48.350 13:01:07 -- target/connect_stress.sh@28 -- # cat 00:38:48.350 13:01:07 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:38:48.350 13:01:07 -- target/connect_stress.sh@28 -- # cat 00:38:48.350 13:01:07 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:38:48.350 13:01:07 -- target/connect_stress.sh@28 -- # cat 00:38:48.350 13:01:07 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:38:48.350 13:01:07 -- target/connect_stress.sh@28 -- # cat 00:38:48.350 13:01:07 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:38:48.350 13:01:07 -- target/connect_stress.sh@28 -- # cat 00:38:48.350 13:01:07 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:38:48.350 13:01:07 -- target/connect_stress.sh@28 -- # cat 00:38:48.350 13:01:07 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:38:48.350 13:01:07 -- target/connect_stress.sh@28 -- # cat 00:38:48.350 13:01:07 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:38:48.350 13:01:07 -- target/connect_stress.sh@28 -- # cat 00:38:48.350 13:01:07 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:38:48.350 13:01:07 -- target/connect_stress.sh@28 -- # cat 00:38:48.350 13:01:07 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:38:48.350 13:01:07 -- target/connect_stress.sh@28 -- # cat 00:38:48.350 13:01:07 -- target/connect_stress.sh@34 -- # kill -0 81269 00:38:48.350 13:01:07 -- target/connect_stress.sh@35 -- # rpc_cmd 00:38:48.350 13:01:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:38:48.350 13:01:07 -- common/autotest_common.sh@10 -- # set +x 00:38:48.609 13:01:08 -- common/autotest_common.sh@579 
-- # [[ 0 == 0 ]] 00:38:48.609 13:01:08 -- target/connect_stress.sh@34 -- # kill -0 81269 00:38:48.609 13:01:08 -- target/connect_stress.sh@35 -- # rpc_cmd 00:38:48.609 13:01:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:38:48.609 13:01:08 -- common/autotest_common.sh@10 -- # set +x 00:38:49.175 13:01:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:38:49.176 13:01:08 -- target/connect_stress.sh@34 -- # kill -0 81269 00:38:49.176 13:01:08 -- target/connect_stress.sh@35 -- # rpc_cmd 00:38:49.176 13:01:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:38:49.176 13:01:08 -- common/autotest_common.sh@10 -- # set +x 00:38:49.434 13:01:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:38:49.434 13:01:08 -- target/connect_stress.sh@34 -- # kill -0 81269 00:38:49.434 13:01:08 -- target/connect_stress.sh@35 -- # rpc_cmd 00:38:49.434 13:01:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:38:49.434 13:01:08 -- common/autotest_common.sh@10 -- # set +x 00:38:49.693 13:01:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:38:49.693 13:01:08 -- target/connect_stress.sh@34 -- # kill -0 81269 00:38:49.693 13:01:08 -- target/connect_stress.sh@35 -- # rpc_cmd 00:38:49.693 13:01:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:38:49.693 13:01:08 -- common/autotest_common.sh@10 -- # set +x 00:38:49.951 13:01:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:38:49.951 13:01:09 -- target/connect_stress.sh@34 -- # kill -0 81269 00:38:49.951 13:01:09 -- target/connect_stress.sh@35 -- # rpc_cmd 00:38:49.951 13:01:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:38:49.951 13:01:09 -- common/autotest_common.sh@10 -- # set +x 00:38:50.210 13:01:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:38:50.210 13:01:09 -- target/connect_stress.sh@34 -- # kill -0 81269 00:38:50.210 13:01:09 -- target/connect_stress.sh@35 -- # rpc_cmd 00:38:50.210 13:01:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:38:50.210 13:01:09 -- common/autotest_common.sh@10 -- # set +x 00:38:50.777 13:01:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:38:50.777 13:01:09 -- target/connect_stress.sh@34 -- # kill -0 81269 00:38:50.777 13:01:09 -- target/connect_stress.sh@35 -- # rpc_cmd 00:38:50.777 13:01:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:38:50.777 13:01:09 -- common/autotest_common.sh@10 -- # set +x 00:38:51.035 13:01:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:38:51.035 13:01:10 -- target/connect_stress.sh@34 -- # kill -0 81269 00:38:51.035 13:01:10 -- target/connect_stress.sh@35 -- # rpc_cmd 00:38:51.035 13:01:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:38:51.035 13:01:10 -- common/autotest_common.sh@10 -- # set +x 00:38:51.293 13:01:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:38:51.293 13:01:10 -- target/connect_stress.sh@34 -- # kill -0 81269 00:38:51.293 13:01:10 -- target/connect_stress.sh@35 -- # rpc_cmd 00:38:51.293 13:01:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:38:51.293 13:01:10 -- common/autotest_common.sh@10 -- # set +x 00:38:51.552 13:01:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:38:51.552 13:01:10 -- target/connect_stress.sh@34 -- # kill -0 81269 00:38:51.552 13:01:10 -- target/connect_stress.sh@35 -- # rpc_cmd 00:38:51.552 13:01:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:38:51.552 13:01:10 -- common/autotest_common.sh@10 -- # set +x 00:38:52.119 13:01:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:38:52.119 
13:01:11 -- target/connect_stress.sh@34 -- # kill -0 81269 00:38:52.119 13:01:11 -- target/connect_stress.sh@35 -- # rpc_cmd 00:38:52.119 13:01:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:38:52.119 13:01:11 -- common/autotest_common.sh@10 -- # set +x 00:38:52.378 13:01:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:38:52.378 13:01:11 -- target/connect_stress.sh@34 -- # kill -0 81269 00:38:52.378 13:01:11 -- target/connect_stress.sh@35 -- # rpc_cmd 00:38:52.378 13:01:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:38:52.378 13:01:11 -- common/autotest_common.sh@10 -- # set +x 00:38:52.636 13:01:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:38:52.636 13:01:11 -- target/connect_stress.sh@34 -- # kill -0 81269 00:38:52.636 13:01:11 -- target/connect_stress.sh@35 -- # rpc_cmd 00:38:52.636 13:01:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:38:52.636 13:01:11 -- common/autotest_common.sh@10 -- # set +x 00:38:52.894 13:01:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:38:52.894 13:01:12 -- target/connect_stress.sh@34 -- # kill -0 81269 00:38:52.894 13:01:12 -- target/connect_stress.sh@35 -- # rpc_cmd 00:38:52.894 13:01:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:38:52.894 13:01:12 -- common/autotest_common.sh@10 -- # set +x 00:38:53.152 13:01:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:38:53.152 13:01:12 -- target/connect_stress.sh@34 -- # kill -0 81269 00:38:53.152 13:01:12 -- target/connect_stress.sh@35 -- # rpc_cmd 00:38:53.152 13:01:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:38:53.152 13:01:12 -- common/autotest_common.sh@10 -- # set +x 00:38:53.720 13:01:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:38:53.720 13:01:12 -- target/connect_stress.sh@34 -- # kill -0 81269 00:38:53.720 13:01:12 -- target/connect_stress.sh@35 -- # rpc_cmd 00:38:53.720 13:01:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:38:53.720 13:01:12 -- common/autotest_common.sh@10 -- # set +x 00:38:53.979 13:01:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:38:53.979 13:01:13 -- target/connect_stress.sh@34 -- # kill -0 81269 00:38:53.979 13:01:13 -- target/connect_stress.sh@35 -- # rpc_cmd 00:38:53.979 13:01:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:38:53.979 13:01:13 -- common/autotest_common.sh@10 -- # set +x 00:38:54.237 13:01:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:38:54.237 13:01:13 -- target/connect_stress.sh@34 -- # kill -0 81269 00:38:54.237 13:01:13 -- target/connect_stress.sh@35 -- # rpc_cmd 00:38:54.237 13:01:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:38:54.237 13:01:13 -- common/autotest_common.sh@10 -- # set +x 00:38:54.527 13:01:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:38:54.527 13:01:13 -- target/connect_stress.sh@34 -- # kill -0 81269 00:38:54.527 13:01:13 -- target/connect_stress.sh@35 -- # rpc_cmd 00:38:54.527 13:01:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:38:54.527 13:01:13 -- common/autotest_common.sh@10 -- # set +x 00:38:54.785 13:01:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:38:54.785 13:01:14 -- target/connect_stress.sh@34 -- # kill -0 81269 00:38:54.785 13:01:14 -- target/connect_stress.sh@35 -- # rpc_cmd 00:38:54.786 13:01:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:38:54.786 13:01:14 -- common/autotest_common.sh@10 -- # set +x 00:38:55.044 13:01:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:38:55.044 13:01:14 -- 
target/connect_stress.sh@34 -- # kill -0 81269 00:38:55.044 13:01:14 -- target/connect_stress.sh@35 -- # rpc_cmd 00:38:55.044 13:01:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:38:55.044 13:01:14 -- common/autotest_common.sh@10 -- # set +x 00:38:55.612 13:01:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:38:55.612 13:01:14 -- target/connect_stress.sh@34 -- # kill -0 81269 00:38:55.612 13:01:14 -- target/connect_stress.sh@35 -- # rpc_cmd 00:38:55.612 13:01:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:38:55.612 13:01:14 -- common/autotest_common.sh@10 -- # set +x 00:38:55.870 13:01:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:38:55.870 13:01:15 -- target/connect_stress.sh@34 -- # kill -0 81269 00:38:55.870 13:01:15 -- target/connect_stress.sh@35 -- # rpc_cmd 00:38:55.870 13:01:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:38:55.870 13:01:15 -- common/autotest_common.sh@10 -- # set +x 00:38:56.128 13:01:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:38:56.128 13:01:15 -- target/connect_stress.sh@34 -- # kill -0 81269 00:38:56.128 13:01:15 -- target/connect_stress.sh@35 -- # rpc_cmd 00:38:56.128 13:01:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:38:56.128 13:01:15 -- common/autotest_common.sh@10 -- # set +x 00:38:56.387 13:01:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:38:56.387 13:01:15 -- target/connect_stress.sh@34 -- # kill -0 81269 00:38:56.387 13:01:15 -- target/connect_stress.sh@35 -- # rpc_cmd 00:38:56.387 13:01:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:38:56.387 13:01:15 -- common/autotest_common.sh@10 -- # set +x 00:38:56.646 13:01:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:38:56.646 13:01:16 -- target/connect_stress.sh@34 -- # kill -0 81269 00:38:56.646 13:01:16 -- target/connect_stress.sh@35 -- # rpc_cmd 00:38:56.646 13:01:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:38:56.646 13:01:16 -- common/autotest_common.sh@10 -- # set +x 00:38:57.213 13:01:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:38:57.213 13:01:16 -- target/connect_stress.sh@34 -- # kill -0 81269 00:38:57.213 13:01:16 -- target/connect_stress.sh@35 -- # rpc_cmd 00:38:57.213 13:01:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:38:57.213 13:01:16 -- common/autotest_common.sh@10 -- # set +x 00:38:57.472 13:01:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:38:57.472 13:01:16 -- target/connect_stress.sh@34 -- # kill -0 81269 00:38:57.472 13:01:16 -- target/connect_stress.sh@35 -- # rpc_cmd 00:38:57.472 13:01:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:38:57.472 13:01:16 -- common/autotest_common.sh@10 -- # set +x 00:38:57.731 13:01:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:38:57.731 13:01:17 -- target/connect_stress.sh@34 -- # kill -0 81269 00:38:57.731 13:01:17 -- target/connect_stress.sh@35 -- # rpc_cmd 00:38:57.731 13:01:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:38:57.731 13:01:17 -- common/autotest_common.sh@10 -- # set +x 00:38:57.989 13:01:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:38:57.989 13:01:17 -- target/connect_stress.sh@34 -- # kill -0 81269 00:38:57.989 13:01:17 -- target/connect_stress.sh@35 -- # rpc_cmd 00:38:57.989 13:01:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:38:57.989 13:01:17 -- common/autotest_common.sh@10 -- # set +x 00:38:58.247 13:01:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:38:58.247 13:01:17 -- target/connect_stress.sh@34 -- # 
kill -0 81269 00:38:58.247 13:01:17 -- target/connect_stress.sh@35 -- # rpc_cmd 00:38:58.247 13:01:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:38:58.247 13:01:17 -- common/autotest_common.sh@10 -- # set +x 00:38:58.506 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:38:58.765 13:01:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:38:58.765 13:01:17 -- target/connect_stress.sh@34 -- # kill -0 81269 00:38:58.765 /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (81269) - No such process 00:38:58.765 13:01:17 -- target/connect_stress.sh@38 -- # wait 81269 00:38:58.765 13:01:17 -- target/connect_stress.sh@39 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:38:58.765 13:01:17 -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:38:58.765 13:01:17 -- target/connect_stress.sh@43 -- # nvmftestfini 00:38:58.765 13:01:17 -- nvmf/common.sh@476 -- # nvmfcleanup 00:38:58.765 13:01:17 -- nvmf/common.sh@116 -- # sync 00:38:58.765 13:01:18 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:38:58.765 13:01:18 -- nvmf/common.sh@119 -- # set +e 00:38:58.765 13:01:18 -- nvmf/common.sh@120 -- # for i in {1..20} 00:38:58.765 13:01:18 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:38:58.765 rmmod nvme_tcp 00:38:58.765 rmmod nvme_fabrics 00:38:58.765 rmmod nvme_keyring 00:38:58.765 13:01:18 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:38:58.765 13:01:18 -- nvmf/common.sh@123 -- # set -e 00:38:58.765 13:01:18 -- nvmf/common.sh@124 -- # return 0 00:38:58.765 13:01:18 -- nvmf/common.sh@477 -- # '[' -n 81217 ']' 00:38:58.765 13:01:18 -- nvmf/common.sh@478 -- # killprocess 81217 00:38:58.765 13:01:18 -- common/autotest_common.sh@926 -- # '[' -z 81217 ']' 00:38:58.765 13:01:18 -- common/autotest_common.sh@930 -- # kill -0 81217 00:38:58.765 13:01:18 -- common/autotest_common.sh@931 -- # uname 00:38:58.765 13:01:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:38:58.765 13:01:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 81217 00:38:58.765 killing process with pid 81217 00:38:58.765 13:01:18 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:38:58.765 13:01:18 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:38:58.765 13:01:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 81217' 00:38:58.765 13:01:18 -- common/autotest_common.sh@945 -- # kill 81217 00:38:58.765 13:01:18 -- common/autotest_common.sh@950 -- # wait 81217 00:38:59.024 13:01:18 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:38:59.024 13:01:18 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:38:59.024 13:01:18 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:38:59.024 13:01:18 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:38:59.024 13:01:18 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:38:59.024 13:01:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:59.024 13:01:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:38:59.024 13:01:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:59.024 13:01:18 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:38:59.024 00:38:59.024 real 0m12.180s 00:38:59.024 user 0m40.889s 00:38:59.024 sys 0m3.369s 00:38:59.024 13:01:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:38:59.024 13:01:18 -- common/autotest_common.sh@10 -- # set +x 00:38:59.024 ************************************ 
00:38:59.024 END TEST nvmf_connect_stress 00:38:59.024 ************************************ 00:38:59.024 13:01:18 -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:38:59.024 13:01:18 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:38:59.024 13:01:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:38:59.024 13:01:18 -- common/autotest_common.sh@10 -- # set +x 00:38:59.024 ************************************ 00:38:59.024 START TEST nvmf_fused_ordering 00:38:59.024 ************************************ 00:38:59.024 13:01:18 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:38:59.283 * Looking for test storage... 00:38:59.283 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:38:59.283 13:01:18 -- target/fused_ordering.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:38:59.283 13:01:18 -- nvmf/common.sh@7 -- # uname -s 00:38:59.283 13:01:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:59.283 13:01:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:59.283 13:01:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:59.283 13:01:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:59.283 13:01:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:59.283 13:01:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:59.283 13:01:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:59.283 13:01:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:59.283 13:01:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:59.283 13:01:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:59.283 13:01:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:864ae4a3-cd96-439b-8ef2-6dab4d992115 00:38:59.283 13:01:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=864ae4a3-cd96-439b-8ef2-6dab4d992115 00:38:59.283 13:01:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:59.283 13:01:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:59.283 13:01:18 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:38:59.283 13:01:18 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:38:59.283 13:01:18 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:59.283 13:01:18 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:59.283 13:01:18 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:59.283 13:01:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:59.283 13:01:18 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:59.283 13:01:18 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:59.283 13:01:18 -- paths/export.sh@5 -- # export PATH 00:38:59.283 13:01:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:59.283 13:01:18 -- nvmf/common.sh@46 -- # : 0 00:38:59.283 13:01:18 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:38:59.283 13:01:18 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:38:59.283 13:01:18 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:38:59.283 13:01:18 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:59.283 13:01:18 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:59.283 13:01:18 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:38:59.283 13:01:18 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:38:59.283 13:01:18 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:38:59.283 13:01:18 -- target/fused_ordering.sh@12 -- # nvmftestinit 00:38:59.283 13:01:18 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:38:59.284 13:01:18 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:59.284 13:01:18 -- nvmf/common.sh@436 -- # prepare_net_devs 00:38:59.284 13:01:18 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:38:59.284 13:01:18 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:38:59.284 13:01:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:59.284 13:01:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:38:59.284 13:01:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:59.284 13:01:18 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:38:59.284 13:01:18 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:38:59.284 13:01:18 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:38:59.284 13:01:18 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:38:59.284 13:01:18 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:38:59.284 13:01:18 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:38:59.284 13:01:18 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:59.284 
13:01:18 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:59.284 13:01:18 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:38:59.284 13:01:18 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:38:59.284 13:01:18 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:38:59.284 13:01:18 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:38:59.284 13:01:18 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:38:59.284 13:01:18 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:59.284 13:01:18 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:38:59.284 13:01:18 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:38:59.284 13:01:18 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:38:59.284 13:01:18 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:38:59.284 13:01:18 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:38:59.284 13:01:18 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:38:59.284 Cannot find device "nvmf_tgt_br" 00:38:59.284 13:01:18 -- nvmf/common.sh@154 -- # true 00:38:59.284 13:01:18 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:38:59.284 Cannot find device "nvmf_tgt_br2" 00:38:59.284 13:01:18 -- nvmf/common.sh@155 -- # true 00:38:59.284 13:01:18 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:38:59.284 13:01:18 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:38:59.284 Cannot find device "nvmf_tgt_br" 00:38:59.284 13:01:18 -- nvmf/common.sh@157 -- # true 00:38:59.284 13:01:18 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:38:59.284 Cannot find device "nvmf_tgt_br2" 00:38:59.284 13:01:18 -- nvmf/common.sh@158 -- # true 00:38:59.284 13:01:18 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:38:59.284 13:01:18 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:38:59.284 13:01:18 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:38:59.284 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:38:59.284 13:01:18 -- nvmf/common.sh@161 -- # true 00:38:59.284 13:01:18 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:38:59.284 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:38:59.284 13:01:18 -- nvmf/common.sh@162 -- # true 00:38:59.284 13:01:18 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:38:59.284 13:01:18 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:38:59.284 13:01:18 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:38:59.284 13:01:18 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:38:59.284 13:01:18 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:38:59.284 13:01:18 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:38:59.284 13:01:18 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:38:59.284 13:01:18 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:38:59.543 13:01:18 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:38:59.543 13:01:18 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:38:59.543 13:01:18 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:38:59.543 
13:01:18 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:38:59.543 13:01:18 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:38:59.543 13:01:18 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:38:59.543 13:01:18 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:38:59.543 13:01:18 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:38:59.543 13:01:18 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:38:59.543 13:01:18 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:38:59.543 13:01:18 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:38:59.543 13:01:18 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:38:59.543 13:01:18 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:38:59.543 13:01:18 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:38:59.543 13:01:18 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:38:59.543 13:01:18 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:38:59.543 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:59.543 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.055 ms 00:38:59.543 00:38:59.543 --- 10.0.0.2 ping statistics --- 00:38:59.543 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:59.543 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:38:59.543 13:01:18 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:38:59.543 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:38:59.543 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:38:59.543 00:38:59.543 --- 10.0.0.3 ping statistics --- 00:38:59.543 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:59.543 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:38:59.543 13:01:18 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:38:59.543 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:59.543 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:38:59.543 00:38:59.543 --- 10.0.0.1 ping statistics --- 00:38:59.543 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:59.543 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:38:59.543 13:01:18 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:59.543 13:01:18 -- nvmf/common.sh@421 -- # return 0 00:38:59.543 13:01:18 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:38:59.543 13:01:18 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:59.543 13:01:18 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:38:59.543 13:01:18 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:38:59.543 13:01:18 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:59.543 13:01:18 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:38:59.543 13:01:18 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:38:59.543 13:01:18 -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:38:59.543 13:01:18 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:38:59.543 13:01:18 -- common/autotest_common.sh@712 -- # xtrace_disable 00:38:59.543 13:01:18 -- common/autotest_common.sh@10 -- # set +x 00:38:59.543 13:01:18 -- nvmf/common.sh@469 -- # nvmfpid=81594 00:38:59.543 13:01:18 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:38:59.543 13:01:18 -- nvmf/common.sh@470 -- # waitforlisten 81594 00:38:59.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:59.543 13:01:18 -- common/autotest_common.sh@819 -- # '[' -z 81594 ']' 00:38:59.543 13:01:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:59.543 13:01:18 -- common/autotest_common.sh@824 -- # local max_retries=100 00:38:59.543 13:01:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:59.543 13:01:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:38:59.543 13:01:18 -- common/autotest_common.sh@10 -- # set +x 00:38:59.543 [2024-07-22 13:01:18.917244] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:38:59.543 [2024-07-22 13:01:18.917498] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:59.802 [2024-07-22 13:01:19.054983] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:59.802 [2024-07-22 13:01:19.113589] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:38:59.802 [2024-07-22 13:01:19.114035] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:59.802 [2024-07-22 13:01:19.114088] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:59.802 [2024-07-22 13:01:19.114102] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
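nvmfappstart above launches nvmf_tgt inside the target namespace, and waitforlisten then blocks until the application answers on its RPC socket. A rough equivalent of those two steps, as a sketch only (the real helpers in test/nvmf/common.sh and autotest_common.sh also track the pid, shared-memory id and timeouts; the rpc_get_methods poll below is an assumed stand-in for waitforlisten's readiness check):

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!
  # the UNIX-domain RPC socket is visible from the root namespace, so poll it until the app is ready
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done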
00:38:59.802 [2024-07-22 13:01:19.114135] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:39:00.738 13:01:19 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:39:00.738 13:01:19 -- common/autotest_common.sh@852 -- # return 0 00:39:00.738 13:01:19 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:39:00.738 13:01:19 -- common/autotest_common.sh@718 -- # xtrace_disable 00:39:00.738 13:01:19 -- common/autotest_common.sh@10 -- # set +x 00:39:00.738 13:01:19 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:00.738 13:01:19 -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:00.738 13:01:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:39:00.738 13:01:19 -- common/autotest_common.sh@10 -- # set +x 00:39:00.738 [2024-07-22 13:01:19.953890] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:00.738 13:01:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:39:00.738 13:01:19 -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:39:00.738 13:01:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:39:00.738 13:01:19 -- common/autotest_common.sh@10 -- # set +x 00:39:00.738 13:01:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:39:00.738 13:01:19 -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:00.738 13:01:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:39:00.738 13:01:19 -- common/autotest_common.sh@10 -- # set +x 00:39:00.738 [2024-07-22 13:01:19.969982] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:00.738 13:01:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:39:00.738 13:01:19 -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:39:00.738 13:01:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:39:00.738 13:01:19 -- common/autotest_common.sh@10 -- # set +x 00:39:00.738 NULL1 00:39:00.738 13:01:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:39:00.738 13:01:19 -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:39:00.738 13:01:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:39:00.738 13:01:19 -- common/autotest_common.sh@10 -- # set +x 00:39:00.738 13:01:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:39:00.738 13:01:19 -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:39:00.738 13:01:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:39:00.738 13:01:19 -- common/autotest_common.sh@10 -- # set +x 00:39:00.738 13:01:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:39:00.738 13:01:19 -- target/fused_ordering.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:39:00.738 [2024-07-22 13:01:20.018100] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
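The rpc_cmd calls traced above configure the target end to end before the fused_ordering initiator app starts doing I/O: create the TCP transport, create subsystem cnode1, add the 10.0.0.2:4420 listener, back it with a null bdev and expose that bdev as namespace 1. Condensed into plain scripts/rpc.py invocations (rpc_cmd effectively forwards to rpc.py against the default /var/tmp/spdk.sock socket):

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py bdev_null_create NULL1 1000 512    # 1000 MB null bdev, 512-byte blocks -> the 1GB namespace reported below
  rpc.py bdev_wait_for_examine
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1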
00:39:00.738 [2024-07-22 13:01:20.018162] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81650 ] 00:39:01.306 Attached to nqn.2016-06.io.spdk:cnode1 00:39:01.306 Namespace ID: 1 size: 1GB 00:39:01.306 fused_ordering(0) 00:39:01.306 fused_ordering(1) 00:39:01.306 fused_ordering(2) 00:39:01.306 fused_ordering(3) 00:39:01.306 fused_ordering(4) 00:39:01.306 fused_ordering(5) 00:39:01.306 fused_ordering(6) 00:39:01.306 fused_ordering(7) 00:39:01.306 fused_ordering(8) 00:39:01.306 fused_ordering(9) 00:39:01.306 fused_ordering(10) 00:39:01.306 fused_ordering(11) 00:39:01.306 fused_ordering(12) 00:39:01.306 fused_ordering(13) 00:39:01.306 fused_ordering(14) 00:39:01.306 fused_ordering(15) 00:39:01.306 fused_ordering(16) 00:39:01.306 fused_ordering(17) 00:39:01.306 fused_ordering(18) 00:39:01.306 fused_ordering(19) 00:39:01.306 fused_ordering(20) 00:39:01.306 fused_ordering(21) 00:39:01.306 fused_ordering(22) 00:39:01.306 fused_ordering(23) 00:39:01.306 fused_ordering(24) 00:39:01.306 fused_ordering(25) 00:39:01.306 fused_ordering(26) 00:39:01.306 fused_ordering(27) 00:39:01.306 fused_ordering(28) 00:39:01.306 fused_ordering(29) 00:39:01.306 fused_ordering(30) 00:39:01.306 fused_ordering(31) 00:39:01.306 fused_ordering(32) 00:39:01.306 fused_ordering(33) 00:39:01.306 fused_ordering(34) 00:39:01.306 fused_ordering(35) 00:39:01.306 fused_ordering(36) 00:39:01.306 fused_ordering(37) 00:39:01.306 fused_ordering(38) 00:39:01.306 fused_ordering(39) 00:39:01.306 fused_ordering(40) 00:39:01.306 fused_ordering(41) 00:39:01.306 fused_ordering(42) 00:39:01.306 fused_ordering(43) 00:39:01.306 fused_ordering(44) 00:39:01.306 fused_ordering(45) 00:39:01.306 fused_ordering(46) 00:39:01.306 fused_ordering(47) 00:39:01.306 fused_ordering(48) 00:39:01.306 fused_ordering(49) 00:39:01.306 fused_ordering(50) 00:39:01.306 fused_ordering(51) 00:39:01.306 fused_ordering(52) 00:39:01.306 fused_ordering(53) 00:39:01.306 fused_ordering(54) 00:39:01.306 fused_ordering(55) 00:39:01.306 fused_ordering(56) 00:39:01.306 fused_ordering(57) 00:39:01.306 fused_ordering(58) 00:39:01.306 fused_ordering(59) 00:39:01.306 fused_ordering(60) 00:39:01.306 fused_ordering(61) 00:39:01.306 fused_ordering(62) 00:39:01.306 fused_ordering(63) 00:39:01.306 fused_ordering(64) 00:39:01.306 fused_ordering(65) 00:39:01.306 fused_ordering(66) 00:39:01.306 fused_ordering(67) 00:39:01.306 fused_ordering(68) 00:39:01.306 fused_ordering(69) 00:39:01.306 fused_ordering(70) 00:39:01.306 fused_ordering(71) 00:39:01.306 fused_ordering(72) 00:39:01.306 fused_ordering(73) 00:39:01.306 fused_ordering(74) 00:39:01.306 fused_ordering(75) 00:39:01.306 fused_ordering(76) 00:39:01.306 fused_ordering(77) 00:39:01.306 fused_ordering(78) 00:39:01.306 fused_ordering(79) 00:39:01.306 fused_ordering(80) 00:39:01.306 fused_ordering(81) 00:39:01.306 fused_ordering(82) 00:39:01.306 fused_ordering(83) 00:39:01.306 fused_ordering(84) 00:39:01.306 fused_ordering(85) 00:39:01.306 fused_ordering(86) 00:39:01.306 fused_ordering(87) 00:39:01.306 fused_ordering(88) 00:39:01.306 fused_ordering(89) 00:39:01.306 fused_ordering(90) 00:39:01.306 fused_ordering(91) 00:39:01.306 fused_ordering(92) 00:39:01.306 fused_ordering(93) 00:39:01.306 fused_ordering(94) 00:39:01.306 fused_ordering(95) 00:39:01.306 fused_ordering(96) 00:39:01.306 fused_ordering(97) 00:39:01.306 fused_ordering(98) 
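The fused_ordering app attaches through the SPDK userspace initiator using the -r transport-ID string shown above, and the fused_ordering(N) lines around this point are simply its running progress counter. For manually poking the same listener the kernel initiator would also work, since nvme-tcp was modprobed earlier and common.sh defines NVME_CONNECT and NVME_HOSTNQN; an illustrative invocation, not part of this test (device name is a placeholder):

  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:864ae4a3-cd96-439b-8ef2-6dab4d992115
  nvme list                                 # NULL1 should show up as a 1 GB /dev/nvmeXn1
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1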
00:39:01.306 fused_ordering(99) 00:39:01.306 fused_ordering(100) 00:39:01.306 fused_ordering(101) 00:39:01.306 fused_ordering(102) 00:39:01.306 fused_ordering(103) 00:39:01.306 fused_ordering(104) 00:39:01.306 fused_ordering(105) 00:39:01.306 fused_ordering(106) 00:39:01.306 fused_ordering(107) 00:39:01.306 fused_ordering(108) 00:39:01.306 fused_ordering(109) 00:39:01.306 fused_ordering(110) 00:39:01.306 fused_ordering(111) 00:39:01.306 fused_ordering(112) 00:39:01.306 fused_ordering(113) 00:39:01.306 fused_ordering(114) 00:39:01.306 fused_ordering(115) 00:39:01.306 fused_ordering(116) 00:39:01.306 fused_ordering(117) 00:39:01.306 fused_ordering(118) 00:39:01.306 fused_ordering(119) 00:39:01.306 fused_ordering(120) 00:39:01.306 fused_ordering(121) 00:39:01.307 fused_ordering(122) 00:39:01.307 fused_ordering(123) 00:39:01.307 fused_ordering(124) 00:39:01.307 fused_ordering(125) 00:39:01.307 fused_ordering(126) 00:39:01.307 fused_ordering(127) 00:39:01.307 fused_ordering(128) 00:39:01.307 fused_ordering(129) 00:39:01.307 fused_ordering(130) 00:39:01.307 fused_ordering(131) 00:39:01.307 fused_ordering(132) 00:39:01.307 fused_ordering(133) 00:39:01.307 fused_ordering(134) 00:39:01.307 fused_ordering(135) 00:39:01.307 fused_ordering(136) 00:39:01.307 fused_ordering(137) 00:39:01.307 fused_ordering(138) 00:39:01.307 fused_ordering(139) 00:39:01.307 fused_ordering(140) 00:39:01.307 fused_ordering(141) 00:39:01.307 fused_ordering(142) 00:39:01.307 fused_ordering(143) 00:39:01.307 fused_ordering(144) 00:39:01.307 fused_ordering(145) 00:39:01.307 fused_ordering(146) 00:39:01.307 fused_ordering(147) 00:39:01.307 fused_ordering(148) 00:39:01.307 fused_ordering(149) 00:39:01.307 fused_ordering(150) 00:39:01.307 fused_ordering(151) 00:39:01.307 fused_ordering(152) 00:39:01.307 fused_ordering(153) 00:39:01.307 fused_ordering(154) 00:39:01.307 fused_ordering(155) 00:39:01.307 fused_ordering(156) 00:39:01.307 fused_ordering(157) 00:39:01.307 fused_ordering(158) 00:39:01.307 fused_ordering(159) 00:39:01.307 fused_ordering(160) 00:39:01.307 fused_ordering(161) 00:39:01.307 fused_ordering(162) 00:39:01.307 fused_ordering(163) 00:39:01.307 fused_ordering(164) 00:39:01.307 fused_ordering(165) 00:39:01.307 fused_ordering(166) 00:39:01.307 fused_ordering(167) 00:39:01.307 fused_ordering(168) 00:39:01.307 fused_ordering(169) 00:39:01.307 fused_ordering(170) 00:39:01.307 fused_ordering(171) 00:39:01.307 fused_ordering(172) 00:39:01.307 fused_ordering(173) 00:39:01.307 fused_ordering(174) 00:39:01.307 fused_ordering(175) 00:39:01.307 fused_ordering(176) 00:39:01.307 fused_ordering(177) 00:39:01.307 fused_ordering(178) 00:39:01.307 fused_ordering(179) 00:39:01.307 fused_ordering(180) 00:39:01.307 fused_ordering(181) 00:39:01.307 fused_ordering(182) 00:39:01.307 fused_ordering(183) 00:39:01.307 fused_ordering(184) 00:39:01.307 fused_ordering(185) 00:39:01.307 fused_ordering(186) 00:39:01.307 fused_ordering(187) 00:39:01.307 fused_ordering(188) 00:39:01.307 fused_ordering(189) 00:39:01.307 fused_ordering(190) 00:39:01.307 fused_ordering(191) 00:39:01.307 fused_ordering(192) 00:39:01.307 fused_ordering(193) 00:39:01.307 fused_ordering(194) 00:39:01.307 fused_ordering(195) 00:39:01.307 fused_ordering(196) 00:39:01.307 fused_ordering(197) 00:39:01.307 fused_ordering(198) 00:39:01.307 fused_ordering(199) 00:39:01.307 fused_ordering(200) 00:39:01.307 fused_ordering(201) 00:39:01.307 fused_ordering(202) 00:39:01.307 fused_ordering(203) 00:39:01.307 fused_ordering(204) 00:39:01.307 fused_ordering(205) 00:39:01.307 
fused_ordering(206) 00:39:01.307 fused_ordering(207) 00:39:01.307 fused_ordering(208) 00:39:01.307 fused_ordering(209) 00:39:01.307 fused_ordering(210) 00:39:01.307 fused_ordering(211) 00:39:01.307 fused_ordering(212) 00:39:01.307 fused_ordering(213) 00:39:01.307 fused_ordering(214) 00:39:01.307 fused_ordering(215) 00:39:01.307 fused_ordering(216) 00:39:01.307 fused_ordering(217) 00:39:01.307 fused_ordering(218) 00:39:01.307 fused_ordering(219) 00:39:01.307 fused_ordering(220) 00:39:01.307 fused_ordering(221) 00:39:01.307 fused_ordering(222) 00:39:01.307 fused_ordering(223) 00:39:01.307 fused_ordering(224) 00:39:01.307 fused_ordering(225) 00:39:01.307 fused_ordering(226) 00:39:01.307 fused_ordering(227) 00:39:01.307 fused_ordering(228) 00:39:01.307 fused_ordering(229) 00:39:01.307 fused_ordering(230) 00:39:01.307 fused_ordering(231) 00:39:01.307 fused_ordering(232) 00:39:01.307 fused_ordering(233) 00:39:01.307 fused_ordering(234) 00:39:01.307 fused_ordering(235) 00:39:01.307 fused_ordering(236) 00:39:01.307 fused_ordering(237) 00:39:01.307 fused_ordering(238) 00:39:01.307 fused_ordering(239) 00:39:01.307 fused_ordering(240) 00:39:01.307 fused_ordering(241) 00:39:01.307 fused_ordering(242) 00:39:01.307 fused_ordering(243) 00:39:01.307 fused_ordering(244) 00:39:01.307 fused_ordering(245) 00:39:01.307 fused_ordering(246) 00:39:01.307 fused_ordering(247) 00:39:01.307 fused_ordering(248) 00:39:01.307 fused_ordering(249) 00:39:01.307 fused_ordering(250) 00:39:01.307 fused_ordering(251) 00:39:01.307 fused_ordering(252) 00:39:01.307 fused_ordering(253) 00:39:01.307 fused_ordering(254) 00:39:01.307 fused_ordering(255) 00:39:01.307 fused_ordering(256) 00:39:01.307 fused_ordering(257) 00:39:01.307 fused_ordering(258) 00:39:01.307 fused_ordering(259) 00:39:01.307 fused_ordering(260) 00:39:01.307 fused_ordering(261) 00:39:01.307 fused_ordering(262) 00:39:01.307 fused_ordering(263) 00:39:01.307 fused_ordering(264) 00:39:01.307 fused_ordering(265) 00:39:01.307 fused_ordering(266) 00:39:01.307 fused_ordering(267) 00:39:01.307 fused_ordering(268) 00:39:01.307 fused_ordering(269) 00:39:01.307 fused_ordering(270) 00:39:01.307 fused_ordering(271) 00:39:01.307 fused_ordering(272) 00:39:01.307 fused_ordering(273) 00:39:01.307 fused_ordering(274) 00:39:01.307 fused_ordering(275) 00:39:01.307 fused_ordering(276) 00:39:01.307 fused_ordering(277) 00:39:01.307 fused_ordering(278) 00:39:01.307 fused_ordering(279) 00:39:01.307 fused_ordering(280) 00:39:01.307 fused_ordering(281) 00:39:01.307 fused_ordering(282) 00:39:01.307 fused_ordering(283) 00:39:01.307 fused_ordering(284) 00:39:01.307 fused_ordering(285) 00:39:01.307 fused_ordering(286) 00:39:01.307 fused_ordering(287) 00:39:01.307 fused_ordering(288) 00:39:01.307 fused_ordering(289) 00:39:01.307 fused_ordering(290) 00:39:01.307 fused_ordering(291) 00:39:01.307 fused_ordering(292) 00:39:01.307 fused_ordering(293) 00:39:01.307 fused_ordering(294) 00:39:01.307 fused_ordering(295) 00:39:01.307 fused_ordering(296) 00:39:01.307 fused_ordering(297) 00:39:01.307 fused_ordering(298) 00:39:01.307 fused_ordering(299) 00:39:01.307 fused_ordering(300) 00:39:01.307 fused_ordering(301) 00:39:01.307 fused_ordering(302) 00:39:01.307 fused_ordering(303) 00:39:01.307 fused_ordering(304) 00:39:01.307 fused_ordering(305) 00:39:01.307 fused_ordering(306) 00:39:01.307 fused_ordering(307) 00:39:01.307 fused_ordering(308) 00:39:01.307 fused_ordering(309) 00:39:01.307 fused_ordering(310) 00:39:01.307 fused_ordering(311) 00:39:01.307 fused_ordering(312) 00:39:01.307 fused_ordering(313) 
00:39:01.307 fused_ordering(314) 00:39:01.307 fused_ordering(315) 00:39:01.307 fused_ordering(316) 00:39:01.307 fused_ordering(317) 00:39:01.307 fused_ordering(318) 00:39:01.307 fused_ordering(319) 00:39:01.307 fused_ordering(320) 00:39:01.307 fused_ordering(321) 00:39:01.307 fused_ordering(322) 00:39:01.307 fused_ordering(323) 00:39:01.307 fused_ordering(324) 00:39:01.307 fused_ordering(325) 00:39:01.307 fused_ordering(326) 00:39:01.307 fused_ordering(327) 00:39:01.307 fused_ordering(328) 00:39:01.307 fused_ordering(329) 00:39:01.307 fused_ordering(330) 00:39:01.307 fused_ordering(331) 00:39:01.307 fused_ordering(332) 00:39:01.307 fused_ordering(333) 00:39:01.307 fused_ordering(334) 00:39:01.307 fused_ordering(335) 00:39:01.307 fused_ordering(336) 00:39:01.307 fused_ordering(337) 00:39:01.307 fused_ordering(338) 00:39:01.307 fused_ordering(339) 00:39:01.307 fused_ordering(340) 00:39:01.307 fused_ordering(341) 00:39:01.307 fused_ordering(342) 00:39:01.307 fused_ordering(343) 00:39:01.307 fused_ordering(344) 00:39:01.307 fused_ordering(345) 00:39:01.307 fused_ordering(346) 00:39:01.307 fused_ordering(347) 00:39:01.307 fused_ordering(348) 00:39:01.307 fused_ordering(349) 00:39:01.307 fused_ordering(350) 00:39:01.307 fused_ordering(351) 00:39:01.307 fused_ordering(352) 00:39:01.307 fused_ordering(353) 00:39:01.307 fused_ordering(354) 00:39:01.307 fused_ordering(355) 00:39:01.307 fused_ordering(356) 00:39:01.307 fused_ordering(357) 00:39:01.307 fused_ordering(358) 00:39:01.307 fused_ordering(359) 00:39:01.307 fused_ordering(360) 00:39:01.307 fused_ordering(361) 00:39:01.307 fused_ordering(362) 00:39:01.307 fused_ordering(363) 00:39:01.307 fused_ordering(364) 00:39:01.307 fused_ordering(365) 00:39:01.307 fused_ordering(366) 00:39:01.307 fused_ordering(367) 00:39:01.307 fused_ordering(368) 00:39:01.307 fused_ordering(369) 00:39:01.307 fused_ordering(370) 00:39:01.307 fused_ordering(371) 00:39:01.307 fused_ordering(372) 00:39:01.307 fused_ordering(373) 00:39:01.307 fused_ordering(374) 00:39:01.307 fused_ordering(375) 00:39:01.307 fused_ordering(376) 00:39:01.307 fused_ordering(377) 00:39:01.307 fused_ordering(378) 00:39:01.307 fused_ordering(379) 00:39:01.307 fused_ordering(380) 00:39:01.307 fused_ordering(381) 00:39:01.307 fused_ordering(382) 00:39:01.307 fused_ordering(383) 00:39:01.307 fused_ordering(384) 00:39:01.307 fused_ordering(385) 00:39:01.307 fused_ordering(386) 00:39:01.307 fused_ordering(387) 00:39:01.307 fused_ordering(388) 00:39:01.307 fused_ordering(389) 00:39:01.307 fused_ordering(390) 00:39:01.307 fused_ordering(391) 00:39:01.307 fused_ordering(392) 00:39:01.307 fused_ordering(393) 00:39:01.307 fused_ordering(394) 00:39:01.307 fused_ordering(395) 00:39:01.307 fused_ordering(396) 00:39:01.307 fused_ordering(397) 00:39:01.307 fused_ordering(398) 00:39:01.307 fused_ordering(399) 00:39:01.307 fused_ordering(400) 00:39:01.307 fused_ordering(401) 00:39:01.308 fused_ordering(402) 00:39:01.308 fused_ordering(403) 00:39:01.308 fused_ordering(404) 00:39:01.308 fused_ordering(405) 00:39:01.308 fused_ordering(406) 00:39:01.308 fused_ordering(407) 00:39:01.308 fused_ordering(408) 00:39:01.308 fused_ordering(409) 00:39:01.308 fused_ordering(410) 00:39:01.566 fused_ordering(411) 00:39:01.566 fused_ordering(412) 00:39:01.566 fused_ordering(413) 00:39:01.566 fused_ordering(414) 00:39:01.566 fused_ordering(415) 00:39:01.566 fused_ordering(416) 00:39:01.566 fused_ordering(417) 00:39:01.566 fused_ordering(418) 00:39:01.566 fused_ordering(419) 00:39:01.566 fused_ordering(420) 00:39:01.566 
fused_ordering(421) ... fused_ordering(958) [uniform single-entry output condensed: entries 421-615 logged at 00:39:01.566-567, entries 616-819 at 00:39:02.134-135, entries 820-958 at 00:39:02.703]
00:39:02.703 fused_ordering(959) 00:39:02.703 fused_ordering(960) 00:39:02.703 fused_ordering(961) 00:39:02.703 fused_ordering(962) 00:39:02.703 fused_ordering(963) 00:39:02.703 fused_ordering(964) 00:39:02.703 fused_ordering(965) 00:39:02.703 fused_ordering(966) 00:39:02.703 fused_ordering(967) 00:39:02.703 fused_ordering(968) 00:39:02.703 fused_ordering(969) 00:39:02.703 fused_ordering(970) 00:39:02.703 fused_ordering(971) 00:39:02.703 fused_ordering(972) 00:39:02.703 fused_ordering(973) 00:39:02.703 fused_ordering(974) 00:39:02.703 fused_ordering(975) 00:39:02.703 fused_ordering(976) 00:39:02.703 fused_ordering(977) 00:39:02.703 fused_ordering(978) 00:39:02.703 fused_ordering(979) 00:39:02.703 fused_ordering(980) 00:39:02.703 fused_ordering(981) 00:39:02.703 fused_ordering(982) 00:39:02.703 fused_ordering(983) 00:39:02.703 fused_ordering(984) 00:39:02.703 fused_ordering(985) 00:39:02.703 fused_ordering(986) 00:39:02.703 fused_ordering(987) 00:39:02.703 fused_ordering(988) 00:39:02.703 fused_ordering(989) 00:39:02.703 fused_ordering(990) 00:39:02.703 fused_ordering(991) 00:39:02.703 fused_ordering(992) 00:39:02.703 fused_ordering(993) 00:39:02.703 fused_ordering(994) 00:39:02.703 fused_ordering(995) 00:39:02.703 fused_ordering(996) 00:39:02.703 fused_ordering(997) 00:39:02.703 fused_ordering(998) 00:39:02.703 fused_ordering(999) 00:39:02.703 fused_ordering(1000) 00:39:02.703 fused_ordering(1001) 00:39:02.703 fused_ordering(1002) 00:39:02.703 fused_ordering(1003) 00:39:02.703 fused_ordering(1004) 00:39:02.703 fused_ordering(1005) 00:39:02.703 fused_ordering(1006) 00:39:02.703 fused_ordering(1007) 00:39:02.703 fused_ordering(1008) 00:39:02.703 fused_ordering(1009) 00:39:02.704 fused_ordering(1010) 00:39:02.704 fused_ordering(1011) 00:39:02.704 fused_ordering(1012) 00:39:02.704 fused_ordering(1013) 00:39:02.704 fused_ordering(1014) 00:39:02.704 fused_ordering(1015) 00:39:02.704 fused_ordering(1016) 00:39:02.704 fused_ordering(1017) 00:39:02.704 fused_ordering(1018) 00:39:02.704 fused_ordering(1019) 00:39:02.704 fused_ordering(1020) 00:39:02.704 fused_ordering(1021) 00:39:02.704 fused_ordering(1022) 00:39:02.704 fused_ordering(1023) 00:39:02.704 13:01:21 -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:39:02.704 13:01:21 -- target/fused_ordering.sh@25 -- # nvmftestfini 00:39:02.704 13:01:21 -- nvmf/common.sh@476 -- # nvmfcleanup 00:39:02.704 13:01:21 -- nvmf/common.sh@116 -- # sync 00:39:02.704 13:01:21 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:39:02.704 13:01:21 -- nvmf/common.sh@119 -- # set +e 00:39:02.704 13:01:21 -- nvmf/common.sh@120 -- # for i in {1..20} 00:39:02.704 13:01:21 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:39:02.704 rmmod nvme_tcp 00:39:02.704 rmmod nvme_fabrics 00:39:02.704 rmmod nvme_keyring 00:39:02.704 13:01:21 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:39:02.704 13:01:21 -- nvmf/common.sh@123 -- # set -e 00:39:02.704 13:01:21 -- nvmf/common.sh@124 -- # return 0 00:39:02.704 13:01:21 -- nvmf/common.sh@477 -- # '[' -n 81594 ']' 00:39:02.704 13:01:21 -- nvmf/common.sh@478 -- # killprocess 81594 00:39:02.704 13:01:21 -- common/autotest_common.sh@926 -- # '[' -z 81594 ']' 00:39:02.704 13:01:21 -- common/autotest_common.sh@930 -- # kill -0 81594 00:39:02.704 13:01:21 -- common/autotest_common.sh@931 -- # uname 00:39:02.704 13:01:21 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:39:02.704 13:01:21 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 81594 00:39:02.704 13:01:21 -- 
common/autotest_common.sh@932 -- # process_name=reactor_1 00:39:02.704 13:01:21 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:39:02.704 13:01:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 81594' 00:39:02.704 killing process with pid 81594 00:39:02.704 13:01:21 -- common/autotest_common.sh@945 -- # kill 81594 00:39:02.704 13:01:21 -- common/autotest_common.sh@950 -- # wait 81594 00:39:02.963 13:01:22 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:39:02.963 13:01:22 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:39:02.963 13:01:22 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:39:02.963 13:01:22 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:39:02.963 13:01:22 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:39:02.963 13:01:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:02.963 13:01:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:39:02.963 13:01:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:02.963 13:01:22 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:39:02.963 00:39:02.963 real 0m3.839s 00:39:02.963 user 0m4.489s 00:39:02.963 sys 0m1.281s 00:39:02.963 13:01:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:02.963 ************************************ 00:39:02.963 13:01:22 -- common/autotest_common.sh@10 -- # set +x 00:39:02.963 END TEST nvmf_fused_ordering 00:39:02.963 ************************************ 00:39:02.963 13:01:22 -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:39:02.963 13:01:22 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:39:02.963 13:01:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:39:02.963 13:01:22 -- common/autotest_common.sh@10 -- # set +x 00:39:02.963 ************************************ 00:39:02.963 START TEST nvmf_delete_subsystem 00:39:02.963 ************************************ 00:39:02.963 13:01:22 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:39:02.963 * Looking for test storage... 
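For readers following the suite boundaries here: each suite in this log is launched through the run_test helper in common/autotest_common.sh (the @1077/@1083/@1104 lines above), which prints the START TEST / END TEST banners and times the script, producing the real/user/sys lines. Only as a rough sketch of the idea, not the actual helper:

    run_test() {
        local test_name=$1
        shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"    # e.g. .../test/nvmf/target/delete_subsystem.sh --transport=tcp
        local rc=$?
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
        return $rc
    }

The banners make it straightforward to grep a single suite (here nvmf_fused_ordering has just ended and nvmf_delete_subsystem begins) out of the full log.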
00:39:02.963 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:39:02.963 13:01:22 -- target/delete_subsystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:39:02.963 13:01:22 -- nvmf/common.sh@7 -- # uname -s 00:39:02.963 13:01:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:02.963 13:01:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:02.963 13:01:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:02.963 13:01:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:02.963 13:01:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:02.963 13:01:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:02.963 13:01:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:02.963 13:01:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:02.963 13:01:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:02.963 13:01:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:02.963 13:01:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:864ae4a3-cd96-439b-8ef2-6dab4d992115 00:39:02.963 13:01:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=864ae4a3-cd96-439b-8ef2-6dab4d992115 00:39:02.963 13:01:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:02.963 13:01:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:02.963 13:01:22 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:39:02.963 13:01:22 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:39:02.963 13:01:22 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:02.963 13:01:22 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:02.963 13:01:22 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:02.963 13:01:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:02.963 13:01:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:02.963 13:01:22 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:02.963 13:01:22 -- 
paths/export.sh@5 -- # export PATH 00:39:02.963 13:01:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:02.963 13:01:22 -- nvmf/common.sh@46 -- # : 0 00:39:02.963 13:01:22 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:39:02.963 13:01:22 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:39:02.963 13:01:22 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:39:02.963 13:01:22 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:02.963 13:01:22 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:02.963 13:01:22 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:39:02.963 13:01:22 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:39:02.963 13:01:22 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:39:02.963 13:01:22 -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:39:02.963 13:01:22 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:39:02.963 13:01:22 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:02.963 13:01:22 -- nvmf/common.sh@436 -- # prepare_net_devs 00:39:02.963 13:01:22 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:39:02.963 13:01:22 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:39:02.963 13:01:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:02.963 13:01:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:39:02.963 13:01:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:03.222 13:01:22 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:39:03.222 13:01:22 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:39:03.222 13:01:22 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:39:03.222 13:01:22 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:39:03.222 13:01:22 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:39:03.222 13:01:22 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:39:03.222 13:01:22 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:03.222 13:01:22 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:03.222 13:01:22 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:39:03.222 13:01:22 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:39:03.222 13:01:22 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:39:03.222 13:01:22 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:39:03.222 13:01:22 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:39:03.222 13:01:22 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:03.222 13:01:22 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:39:03.222 13:01:22 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:39:03.222 13:01:22 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:39:03.222 13:01:22 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:39:03.222 13:01:22 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:39:03.222 13:01:22 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:39:03.222 Cannot find device "nvmf_tgt_br" 00:39:03.222 
13:01:22 -- nvmf/common.sh@154 -- # true 00:39:03.222 13:01:22 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:39:03.222 Cannot find device "nvmf_tgt_br2" 00:39:03.222 13:01:22 -- nvmf/common.sh@155 -- # true 00:39:03.223 13:01:22 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:39:03.223 13:01:22 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:39:03.223 Cannot find device "nvmf_tgt_br" 00:39:03.223 13:01:22 -- nvmf/common.sh@157 -- # true 00:39:03.223 13:01:22 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:39:03.223 Cannot find device "nvmf_tgt_br2" 00:39:03.223 13:01:22 -- nvmf/common.sh@158 -- # true 00:39:03.223 13:01:22 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:39:03.223 13:01:22 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:39:03.223 13:01:22 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:39:03.223 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:39:03.223 13:01:22 -- nvmf/common.sh@161 -- # true 00:39:03.223 13:01:22 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:39:03.223 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:39:03.223 13:01:22 -- nvmf/common.sh@162 -- # true 00:39:03.223 13:01:22 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:39:03.223 13:01:22 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:39:03.223 13:01:22 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:39:03.223 13:01:22 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:39:03.223 13:01:22 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:39:03.223 13:01:22 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:39:03.223 13:01:22 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:39:03.223 13:01:22 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:39:03.223 13:01:22 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:39:03.223 13:01:22 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:39:03.223 13:01:22 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:39:03.223 13:01:22 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:39:03.223 13:01:22 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:39:03.223 13:01:22 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:39:03.223 13:01:22 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:39:03.223 13:01:22 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:39:03.223 13:01:22 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:39:03.223 13:01:22 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:39:03.223 13:01:22 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:39:03.223 13:01:22 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:39:03.481 13:01:22 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:39:03.481 13:01:22 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:39:03.481 13:01:22 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:39:03.481 13:01:22 -- nvmf/common.sh@204 -- # ping 
-c 1 10.0.0.2 00:39:03.481 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:03.481 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:39:03.481 00:39:03.481 --- 10.0.0.2 ping statistics --- 00:39:03.481 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:03.481 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:39:03.481 13:01:22 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:39:03.481 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:39:03.481 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:39:03.481 00:39:03.481 --- 10.0.0.3 ping statistics --- 00:39:03.482 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:03.482 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:39:03.482 13:01:22 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:39:03.482 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:39:03.482 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:39:03.482 00:39:03.482 --- 10.0.0.1 ping statistics --- 00:39:03.482 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:03.482 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:39:03.482 13:01:22 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:03.482 13:01:22 -- nvmf/common.sh@421 -- # return 0 00:39:03.482 13:01:22 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:39:03.482 13:01:22 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:03.482 13:01:22 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:39:03.482 13:01:22 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:39:03.482 13:01:22 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:03.482 13:01:22 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:39:03.482 13:01:22 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:39:03.482 13:01:22 -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:39:03.482 13:01:22 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:39:03.482 13:01:22 -- common/autotest_common.sh@712 -- # xtrace_disable 00:39:03.482 13:01:22 -- common/autotest_common.sh@10 -- # set +x 00:39:03.482 13:01:22 -- nvmf/common.sh@469 -- # nvmfpid=81853 00:39:03.482 13:01:22 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:39:03.482 13:01:22 -- nvmf/common.sh@470 -- # waitforlisten 81853 00:39:03.482 13:01:22 -- common/autotest_common.sh@819 -- # '[' -z 81853 ']' 00:39:03.482 13:01:22 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:03.482 13:01:22 -- common/autotest_common.sh@824 -- # local max_retries=100 00:39:03.482 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:03.482 13:01:22 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:03.482 13:01:22 -- common/autotest_common.sh@828 -- # xtrace_disable 00:39:03.482 13:01:22 -- common/autotest_common.sh@10 -- # set +x 00:39:03.482 [2024-07-22 13:01:22.775718] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
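The ip / iptables commands above come from nvmf_veth_init in test/nvmf/common.sh and build the virtual topology this suite runs on: the initiator keeps nvmf_init_if (10.0.0.1/24) in the root namespace, the target gets nvmf_tgt_if (10.0.0.2/24) and nvmf_tgt_if2 (10.0.0.3/24) inside the nvmf_tgt_ns_spdk namespace, and the veth peer ends are enslaved to the nvmf_br bridge; the three pings are the connectivity check before the target starts. Condensed into a stand-alone sketch (same commands as in the log, with the individual 'ip link set ... up' calls folded into one comment):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    # bring nvmf_init_if, the bridge-side peers and both namespaced links up, then:
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2    # likewise 10.0.0.3, and 10.0.0.1 from inside the namespace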
00:39:03.482 [2024-07-22 13:01:22.775803] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:03.744 [2024-07-22 13:01:22.918182] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:39:03.744 [2024-07-22 13:01:23.002703] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:39:03.744 [2024-07-22 13:01:23.002878] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:03.744 [2024-07-22 13:01:23.002895] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:03.744 [2024-07-22 13:01:23.002906] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:03.744 [2024-07-22 13:01:23.003048] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:39:03.744 [2024-07-22 13:01:23.003035] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:39:04.678 13:01:23 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:39:04.678 13:01:23 -- common/autotest_common.sh@852 -- # return 0 00:39:04.678 13:01:23 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:39:04.678 13:01:23 -- common/autotest_common.sh@718 -- # xtrace_disable 00:39:04.678 13:01:23 -- common/autotest_common.sh@10 -- # set +x 00:39:04.678 13:01:23 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:04.678 13:01:23 -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:04.678 13:01:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:39:04.678 13:01:23 -- common/autotest_common.sh@10 -- # set +x 00:39:04.678 [2024-07-22 13:01:23.818225] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:04.678 13:01:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:39:04.678 13:01:23 -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:39:04.678 13:01:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:39:04.678 13:01:23 -- common/autotest_common.sh@10 -- # set +x 00:39:04.678 13:01:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:39:04.678 13:01:23 -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:04.678 13:01:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:39:04.678 13:01:23 -- common/autotest_common.sh@10 -- # set +x 00:39:04.678 [2024-07-22 13:01:23.834372] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:04.678 13:01:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:39:04.679 13:01:23 -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:39:04.679 13:01:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:39:04.679 13:01:23 -- common/autotest_common.sh@10 -- # set +x 00:39:04.679 NULL1 00:39:04.679 13:01:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:39:04.679 13:01:23 -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:39:04.679 13:01:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:39:04.679 13:01:23 -- common/autotest_common.sh@10 -- # set +x 00:39:04.679 
Delay0 00:39:04.679 13:01:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:39:04.679 13:01:23 -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:04.679 13:01:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:39:04.679 13:01:23 -- common/autotest_common.sh@10 -- # set +x 00:39:04.679 13:01:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:39:04.679 13:01:23 -- target/delete_subsystem.sh@28 -- # perf_pid=81904 00:39:04.679 13:01:23 -- target/delete_subsystem.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:39:04.679 13:01:23 -- target/delete_subsystem.sh@30 -- # sleep 2 00:39:04.679 [2024-07-22 13:01:24.028814] subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:39:06.582 13:01:25 -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:39:06.582 13:01:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:39:06.582 13:01:25 -- common/autotest_common.sh@10 -- # set +x 00:39:06.841 Read completed with error (sct=0, sc=8) 00:39:06.841 starting I/O failed: -6 00:39:06.841 Read completed with error (sct=0, sc=8) 00:39:06.841 Write completed with error (sct=0, sc=8) 00:39:06.841 Read completed with error (sct=0, sc=8) 00:39:06.841 Read completed with error (sct=0, sc=8) 00:39:06.841 starting I/O failed: -6 00:39:06.841 Read completed with error (sct=0, sc=8) 00:39:06.841 Read completed with error (sct=0, sc=8) 00:39:06.841 Write completed with error (sct=0, sc=8) 00:39:06.841 Read completed with error (sct=0, sc=8) 00:39:06.841 starting I/O failed: -6 00:39:06.841 Read completed with error (sct=0, sc=8) 00:39:06.841 Read completed with error (sct=0, sc=8) 00:39:06.841 Read completed with error (sct=0, sc=8) 00:39:06.841 Read completed with error (sct=0, sc=8) 00:39:06.841 starting I/O failed: -6 00:39:06.841 Read completed with error (sct=0, sc=8) 00:39:06.841 Read completed with error (sct=0, sc=8) 00:39:06.841 Read completed with error (sct=0, sc=8) 00:39:06.841 Read completed with error (sct=0, sc=8) 00:39:06.841 starting I/O failed: -6 00:39:06.841 Read completed with error (sct=0, sc=8) 00:39:06.841 Read completed with error (sct=0, sc=8) 00:39:06.841 Read completed with error (sct=0, sc=8) 00:39:06.841 Read completed with error (sct=0, sc=8) 00:39:06.841 starting I/O failed: -6 00:39:06.841 Write completed with error (sct=0, sc=8) 00:39:06.841 Read completed with error (sct=0, sc=8) 00:39:06.841 Write completed with error (sct=0, sc=8) 00:39:06.841 Read completed with error (sct=0, sc=8) 00:39:06.841 starting I/O failed: -6 00:39:06.841 Write completed with error (sct=0, sc=8) 00:39:06.841 Write completed with error (sct=0, sc=8) 00:39:06.841 Write completed with error (sct=0, sc=8) 00:39:06.841 Read completed with error (sct=0, sc=8) 00:39:06.841 starting I/O failed: -6 00:39:06.841 Read completed with error (sct=0, sc=8) 00:39:06.841 Read completed with error (sct=0, sc=8) 00:39:06.841 Read completed with error (sct=0, sc=8) 00:39:06.841 Read completed with error (sct=0, sc=8) 00:39:06.841 starting I/O failed: -6 00:39:06.841 Write completed with error (sct=0, sc=8) 00:39:06.841 Read completed with error (sct=0, sc=8) 
00:39:06.841 Read completed with error (sct=0, sc=8) 00:39:06.841 Read completed with error (sct=0, sc=8) 00:39:06.841 starting I/O failed: -6 00:39:06.841 Read completed with error (sct=0, sc=8) 00:39:06.841 Write completed with error (sct=0, sc=8) 00:39:06.841 Read completed with error (sct=0, sc=8) 00:39:06.841 Write completed with error (sct=0, sc=8) 00:39:06.841 starting I/O failed: -6 00:39:06.841 Write completed with error (sct=0, sc=8) 00:39:06.841 Read completed with error (sct=0, sc=8) 00:39:06.841 Read completed with error (sct=0, sc=8) 00:39:06.841 Write completed with error (sct=0, sc=8) 00:39:06.841 starting I/O failed: -6 00:39:06.841 Read completed with error (sct=0, sc=8) 00:39:06.841 [2024-07-22 13:01:26.063790] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b00600 is same with the state(5) to be set 00:39:06.841 Write completed with error (sct=0, sc=8) 00:39:06.841 Write completed with error (sct=0, sc=8) 00:39:06.841 Read completed with error (sct=0, sc=8) 00:39:06.841 Write completed with error (sct=0, sc=8) 00:39:06.841 Read completed with error (sct=0, sc=8) 00:39:06.841 Write completed with error (sct=0, sc=8) 00:39:06.841 Read completed with error (sct=0, sc=8) 00:39:06.841 Read completed with error (sct=0, sc=8) 00:39:06.841 Write completed with error (sct=0, sc=8) 00:39:06.841 Read completed with error (sct=0, sc=8) 00:39:06.841 Read completed with error (sct=0, sc=8) 00:39:06.841 Write completed with error (sct=0, sc=8) 00:39:06.841 Read completed with error (sct=0, sc=8) 00:39:06.841 Read completed with error (sct=0, sc=8) 00:39:06.841 Read completed with error (sct=0, sc=8) 00:39:06.841 Write completed with error (sct=0, sc=8) 00:39:06.841 Read completed with error (sct=0, sc=8) 00:39:06.841 Write completed with error (sct=0, sc=8) 00:39:06.841 Read completed with error (sct=0, sc=8) 00:39:06.841 Write completed with error (sct=0, sc=8) 00:39:06.841 Read completed with error (sct=0, sc=8) 00:39:06.841 Read completed with error (sct=0, sc=8) 00:39:06.841 Read completed with error (sct=0, sc=8) 00:39:06.841 Write completed with error (sct=0, sc=8) 00:39:06.841 Read completed with error (sct=0, sc=8) 00:39:06.841 Write completed with error (sct=0, sc=8) 00:39:06.841 Read completed with error (sct=0, sc=8) 00:39:06.841 Write completed with error (sct=0, sc=8) 00:39:06.841 Read completed with error (sct=0, sc=8) 00:39:06.841 Write completed with error (sct=0, sc=8) 00:39:06.841 Read completed with error (sct=0, sc=8) 00:39:06.841 Read completed with error (sct=0, sc=8) 00:39:06.841 Read completed with error (sct=0, sc=8) 00:39:06.841 Read completed with error (sct=0, sc=8) 00:39:06.841 Read completed with error (sct=0, sc=8) 00:39:06.841 Write completed with error (sct=0, sc=8) 00:39:06.841 Read completed with error (sct=0, sc=8) 00:39:06.841 Write completed with error (sct=0, sc=8) 00:39:06.841 Write completed with error (sct=0, sc=8) 00:39:06.841 Write completed with error (sct=0, sc=8) 00:39:06.841 Read completed with error (sct=0, sc=8) 00:39:06.841 Read completed with error (sct=0, sc=8) 00:39:06.841 Read completed with error (sct=0, sc=8) 00:39:06.841 Read completed with error (sct=0, sc=8) 00:39:06.841 Write completed with error (sct=0, sc=8) 00:39:06.841 Read completed with error (sct=0, sc=8) 00:39:06.841 Read completed with error (sct=0, sc=8) 00:39:06.841 Write completed with error (sct=0, sc=8) 00:39:06.841 Write completed with error (sct=0, sc=8) 00:39:06.841 Read completed with error (sct=0, sc=8) 00:39:06.841 
Write completed with error (sct=0, sc=8) 00:39:06.841 Write completed with error (sct=0, sc=8) 00:39:06.841 Write completed with error (sct=0, sc=8) 00:39:06.842 Write completed with error (sct=0, sc=8) 00:39:06.842 Read completed with error (sct=0, sc=8) 00:39:06.842 Write completed with error (sct=0, sc=8) 00:39:06.842 Read completed with error (sct=0, sc=8) 00:39:06.842 Read completed with error (sct=0, sc=8) 00:39:06.842 Write completed with error (sct=0, sc=8) 00:39:06.842 starting I/O failed: -6 00:39:06.842 Read completed with error (sct=0, sc=8) 00:39:06.842 Write completed with error (sct=0, sc=8) 00:39:06.842 Write completed with error (sct=0, sc=8) 00:39:06.842 Write completed with error (sct=0, sc=8) 00:39:06.842 starting I/O failed: -6 00:39:06.842 Read completed with error (sct=0, sc=8) 00:39:06.842 Write completed with error (sct=0, sc=8) 00:39:06.842 Read completed with error (sct=0, sc=8) 00:39:06.842 Write completed with error (sct=0, sc=8) 00:39:06.842 starting I/O failed: -6 00:39:06.842 Read completed with error (sct=0, sc=8) 00:39:06.842 Read completed with error (sct=0, sc=8) 00:39:06.842 Write completed with error (sct=0, sc=8) 00:39:06.842 Write completed with error (sct=0, sc=8) 00:39:06.842 starting I/O failed: -6 00:39:06.842 Read completed with error (sct=0, sc=8) 00:39:06.842 Read completed with error (sct=0, sc=8) 00:39:06.842 Read completed with error (sct=0, sc=8) 00:39:06.842 Read completed with error (sct=0, sc=8) 00:39:06.842 starting I/O failed: -6 00:39:06.842 Read completed with error (sct=0, sc=8) 00:39:06.842 Read completed with error (sct=0, sc=8) 00:39:06.842 Read completed with error (sct=0, sc=8) 00:39:06.842 Write completed with error (sct=0, sc=8) 00:39:06.842 starting I/O failed: -6 00:39:06.842 Write completed with error (sct=0, sc=8) 00:39:06.842 Write completed with error (sct=0, sc=8) 00:39:06.842 Write completed with error (sct=0, sc=8) 00:39:06.842 Write completed with error (sct=0, sc=8) 00:39:06.842 starting I/O failed: -6 00:39:06.842 Write completed with error (sct=0, sc=8) 00:39:06.842 Read completed with error (sct=0, sc=8) 00:39:06.842 Read completed with error (sct=0, sc=8) 00:39:06.842 Read completed with error (sct=0, sc=8) 00:39:06.842 starting I/O failed: -6 00:39:06.842 Read completed with error (sct=0, sc=8) 00:39:06.842 Read completed with error (sct=0, sc=8) 00:39:06.842 Read completed with error (sct=0, sc=8) 00:39:06.842 Read completed with error (sct=0, sc=8) 00:39:06.842 starting I/O failed: -6 00:39:06.842 Write completed with error (sct=0, sc=8) 00:39:06.842 Read completed with error (sct=0, sc=8) 00:39:06.842 [2024-07-22 13:01:26.066517] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ff500000c00 is same with the state(5) to be set 00:39:06.842 Read completed with error (sct=0, sc=8) 00:39:06.842 Read completed with error (sct=0, sc=8) 00:39:06.842 Read completed with error (sct=0, sc=8) 00:39:06.842 Read completed with error (sct=0, sc=8) 00:39:06.842 Read completed with error (sct=0, sc=8) 00:39:06.842 Read completed with error (sct=0, sc=8) 00:39:06.842 Read completed with error (sct=0, sc=8) 00:39:06.842 Read completed with error (sct=0, sc=8) 00:39:06.842 Read completed with error (sct=0, sc=8) 00:39:06.842 Read completed with error (sct=0, sc=8) 00:39:06.842 Read completed with error (sct=0, sc=8) 00:39:06.842 Read completed with error (sct=0, sc=8) 00:39:06.842 Read completed with error (sct=0, sc=8) 00:39:06.842 Write completed with error (sct=0, sc=8) 00:39:06.842 Write 
completed with error (sct=0, sc=8) 00:39:06.842 Read completed with error (sct=0, sc=8) 00:39:06.842 Read completed with error (sct=0, sc=8) 00:39:06.842 Read completed with error (sct=0, sc=8) 00:39:06.842 Write completed with error (sct=0, sc=8) 00:39:06.842 Read completed with error (sct=0, sc=8) 00:39:06.842 Write completed with error (sct=0, sc=8) 00:39:06.842 Read completed with error (sct=0, sc=8) 00:39:06.842 Write completed with error (sct=0, sc=8) 00:39:06.842 Read completed with error (sct=0, sc=8) 00:39:06.842 Write completed with error (sct=0, sc=8) 00:39:06.842 Write completed with error (sct=0, sc=8) 00:39:06.842 Read completed with error (sct=0, sc=8) 00:39:06.842 Read completed with error (sct=0, sc=8) 00:39:06.842 Write completed with error (sct=0, sc=8) 00:39:06.842 Read completed with error (sct=0, sc=8) 00:39:06.842 Read completed with error (sct=0, sc=8) 00:39:06.842 Read completed with error (sct=0, sc=8) 00:39:06.842 Read completed with error (sct=0, sc=8) 00:39:06.842 Read completed with error (sct=0, sc=8) 00:39:06.842 Read completed with error (sct=0, sc=8) 00:39:06.842 Write completed with error (sct=0, sc=8) 00:39:06.842 Read completed with error (sct=0, sc=8) 00:39:06.842 Read completed with error (sct=0, sc=8) 00:39:06.842 Read completed with error (sct=0, sc=8) 00:39:06.842 Write completed with error (sct=0, sc=8) 00:39:06.842 Read completed with error (sct=0, sc=8) 00:39:06.842 Read completed with error (sct=0, sc=8) 00:39:06.842 Write completed with error (sct=0, sc=8) 00:39:06.842 Read completed with error (sct=0, sc=8) 00:39:06.842 [2024-07-22 13:01:26.067030] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ff50000c350 is same with the state(5) to be set 00:39:07.777 [2024-07-22 13:01:27.042638] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b028c0 is same with the state(5) to be set 00:39:07.777 Read completed with error (sct=0, sc=8) 00:39:07.777 Write completed with error (sct=0, sc=8) 00:39:07.777 Read completed with error (sct=0, sc=8) 00:39:07.777 Read completed with error (sct=0, sc=8) 00:39:07.777 Write completed with error (sct=0, sc=8) 00:39:07.777 Read completed with error (sct=0, sc=8) 00:39:07.777 Write completed with error (sct=0, sc=8) 00:39:07.777 Read completed with error (sct=0, sc=8) 00:39:07.777 Read completed with error (sct=0, sc=8) 00:39:07.777 Read completed with error (sct=0, sc=8) 00:39:07.777 Write completed with error (sct=0, sc=8) 00:39:07.777 Write completed with error (sct=0, sc=8) 00:39:07.777 Write completed with error (sct=0, sc=8) 00:39:07.777 Read completed with error (sct=0, sc=8) 00:39:07.777 Read completed with error (sct=0, sc=8) 00:39:07.777 Write completed with error (sct=0, sc=8) 00:39:07.777 Read completed with error (sct=0, sc=8) 00:39:07.777 Read completed with error (sct=0, sc=8) 00:39:07.777 Read completed with error (sct=0, sc=8) 00:39:07.777 Read completed with error (sct=0, sc=8) 00:39:07.777 Read completed with error (sct=0, sc=8) 00:39:07.777 Write completed with error (sct=0, sc=8) 00:39:07.777 Read completed with error (sct=0, sc=8) 00:39:07.777 Read completed with error (sct=0, sc=8) 00:39:07.777 Read completed with error (sct=0, sc=8) 00:39:07.777 Read completed with error (sct=0, sc=8) 00:39:07.777 [2024-07-22 13:01:27.065001] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b00350 is same with the state(5) to be set 00:39:07.777 Read completed with error (sct=0, sc=8) 00:39:07.777 Read 
completed with error (sct=0, sc=8) 00:39:07.777 Write completed with error (sct=0, sc=8) 00:39:07.777 Read completed with error (sct=0, sc=8) 00:39:07.777 Read completed with error (sct=0, sc=8) 00:39:07.777 Read completed with error (sct=0, sc=8) 00:39:07.777 Read completed with error (sct=0, sc=8) 00:39:07.777 Read completed with error (sct=0, sc=8) 00:39:07.777 Read completed with error (sct=0, sc=8) 00:39:07.777 Write completed with error (sct=0, sc=8) 00:39:07.777 Read completed with error (sct=0, sc=8) 00:39:07.777 Read completed with error (sct=0, sc=8) 00:39:07.777 Read completed with error (sct=0, sc=8) 00:39:07.777 Read completed with error (sct=0, sc=8) 00:39:07.777 Read completed with error (sct=0, sc=8) 00:39:07.777 Read completed with error (sct=0, sc=8) 00:39:07.777 Read completed with error (sct=0, sc=8) 00:39:07.777 Write completed with error (sct=0, sc=8) 00:39:07.777 Write completed with error (sct=0, sc=8) 00:39:07.777 Read completed with error (sct=0, sc=8) 00:39:07.777 Read completed with error (sct=0, sc=8) 00:39:07.777 Read completed with error (sct=0, sc=8) 00:39:07.777 Read completed with error (sct=0, sc=8) 00:39:07.777 Write completed with error (sct=0, sc=8) 00:39:07.777 Read completed with error (sct=0, sc=8) 00:39:07.777 Read completed with error (sct=0, sc=8) 00:39:07.777 [2024-07-22 13:01:27.065375] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b008b0 is same with the state(5) to be set 00:39:07.777 Read completed with error (sct=0, sc=8) 00:39:07.777 Read completed with error (sct=0, sc=8) 00:39:07.777 Write completed with error (sct=0, sc=8) 00:39:07.777 Write completed with error (sct=0, sc=8) 00:39:07.777 Read completed with error (sct=0, sc=8) 00:39:07.777 Read completed with error (sct=0, sc=8) 00:39:07.778 Write completed with error (sct=0, sc=8) 00:39:07.778 Read completed with error (sct=0, sc=8) 00:39:07.778 Read completed with error (sct=0, sc=8) 00:39:07.778 Read completed with error (sct=0, sc=8) 00:39:07.778 Read completed with error (sct=0, sc=8) 00:39:07.778 Read completed with error (sct=0, sc=8) 00:39:07.778 Read completed with error (sct=0, sc=8) 00:39:07.778 Write completed with error (sct=0, sc=8) 00:39:07.778 Read completed with error (sct=0, sc=8) 00:39:07.778 Read completed with error (sct=0, sc=8) 00:39:07.778 Read completed with error (sct=0, sc=8) 00:39:07.778 [2024-07-22 13:01:27.066782] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ff50000c600 is same with the state(5) to be set 00:39:07.778 Read completed with error (sct=0, sc=8) 00:39:07.778 Read completed with error (sct=0, sc=8) 00:39:07.778 Write completed with error (sct=0, sc=8) 00:39:07.778 Read completed with error (sct=0, sc=8) 00:39:07.778 Read completed with error (sct=0, sc=8) 00:39:07.778 Read completed with error (sct=0, sc=8) 00:39:07.778 Read completed with error (sct=0, sc=8) 00:39:07.778 Read completed with error (sct=0, sc=8) 00:39:07.778 Read completed with error (sct=0, sc=8) 00:39:07.778 Read completed with error (sct=0, sc=8) 00:39:07.778 Read completed with error (sct=0, sc=8) 00:39:07.778 Write completed with error (sct=0, sc=8) 00:39:07.778 Read completed with error (sct=0, sc=8) 00:39:07.778 Read completed with error (sct=0, sc=8) 00:39:07.778 Read completed with error (sct=0, sc=8) 00:39:07.778 Read completed with error (sct=0, sc=8) 00:39:07.778 Read completed with error (sct=0, sc=8) 00:39:07.778 Read completed with error (sct=0, sc=8) 00:39:07.778 [2024-07-22 
13:01:27.066998] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ff50000bf20 is same with the state(5) to be set 00:39:07.778 [2024-07-22 13:01:27.067992] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b028c0 (9): Bad file descriptor 00:39:07.778 /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf: errors occurred 00:39:07.778 13:01:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:39:07.778 13:01:27 -- target/delete_subsystem.sh@34 -- # delay=0 00:39:07.778 13:01:27 -- target/delete_subsystem.sh@35 -- # kill -0 81904 00:39:07.778 13:01:27 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:39:07.778 Initializing NVMe Controllers 00:39:07.778 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:39:07.778 Controller IO queue size 128, less than required. 00:39:07.778 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:39:07.778 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:39:07.778 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:39:07.778 Initialization complete. Launching workers. 00:39:07.778 ======================================================== 00:39:07.778 Latency(us) 00:39:07.778 Device Information : IOPS MiB/s Average min max 00:39:07.778 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 172.87 0.08 890022.82 430.24 1011214.85 00:39:07.778 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 146.04 0.07 996578.35 517.23 2002102.12 00:39:07.778 ======================================================== 00:39:07.778 Total : 318.91 0.16 938819.28 430.24 2002102.12 00:39:07.778 00:39:08.344 13:01:27 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:39:08.345 13:01:27 -- target/delete_subsystem.sh@35 -- # kill -0 81904 00:39:08.345 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (81904) - No such process 00:39:08.345 13:01:27 -- target/delete_subsystem.sh@45 -- # NOT wait 81904 00:39:08.345 13:01:27 -- common/autotest_common.sh@640 -- # local es=0 00:39:08.345 13:01:27 -- common/autotest_common.sh@642 -- # valid_exec_arg wait 81904 00:39:08.345 13:01:27 -- common/autotest_common.sh@628 -- # local arg=wait 00:39:08.345 13:01:27 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:39:08.345 13:01:27 -- common/autotest_common.sh@632 -- # type -t wait 00:39:08.345 13:01:27 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:39:08.345 13:01:27 -- common/autotest_common.sh@643 -- # wait 81904 00:39:08.345 13:01:27 -- common/autotest_common.sh@643 -- # es=1 00:39:08.345 13:01:27 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:39:08.345 13:01:27 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:39:08.345 13:01:27 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:39:08.345 13:01:27 -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:39:08.345 13:01:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:39:08.345 13:01:27 -- common/autotest_common.sh@10 -- # set +x 00:39:08.345 13:01:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:39:08.345 13:01:27 -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:08.345 13:01:27 -- 
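To summarize the output above: the target (nvmf_tgt started with -m 0x3 inside the nvmf_tgt_ns_spdk namespace) was configured over rpc_cmd, which is effectively scripts/rpc.py against the default /var/tmp/spdk.sock. A 1000 MB null bdev was wrapped in a delay bdev whose latency arguments are in microseconds (so roughly 1 s per I/O), exported as NSID 1 of cnode1, and spdk_nvme_perf was pointed at 10.0.0.2:4420. nvmf_delete_subsystem was then issued while that workload was in flight, so queued commands complete with sct=0/sc=8 (generic status 0x08, i.e. command aborted due to SQ deletion) and new submissions fail with -6 (-ENXIO), which is why spdk_nvme_perf reports 'errors occurred' -- that failure is exactly what the test expects. The same sequence, condensed into plain rpc.py calls (repo-relative path assumed):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py bdev_null_create NULL1 1000 512                  # 1000 MB backing bdev, 512 B blocks
    scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000                 # avg/p99 read and write latency, in us
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    # start spdk_nvme_perf against 10.0.0.2:4420, sleep 2, then pull the subsystem out from under it:
    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1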
common/autotest_common.sh@551 -- # xtrace_disable 00:39:08.345 13:01:27 -- common/autotest_common.sh@10 -- # set +x 00:39:08.345 [2024-07-22 13:01:27.594079] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:08.345 13:01:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:39:08.345 13:01:27 -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:08.345 13:01:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:39:08.345 13:01:27 -- common/autotest_common.sh@10 -- # set +x 00:39:08.345 13:01:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:39:08.345 13:01:27 -- target/delete_subsystem.sh@54 -- # perf_pid=81951 00:39:08.345 13:01:27 -- target/delete_subsystem.sh@56 -- # delay=0 00:39:08.345 13:01:27 -- target/delete_subsystem.sh@57 -- # kill -0 81951 00:39:08.345 13:01:27 -- target/delete_subsystem.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:39:08.345 13:01:27 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:39:08.603 [2024-07-22 13:01:27.773558] subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:39:08.861 13:01:28 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:39:08.861 13:01:28 -- target/delete_subsystem.sh@57 -- # kill -0 81951 00:39:08.861 13:01:28 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:39:09.428 13:01:28 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:39:09.428 13:01:28 -- target/delete_subsystem.sh@57 -- # kill -0 81951 00:39:09.428 13:01:28 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:39:10.013 13:01:29 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:39:10.013 13:01:29 -- target/delete_subsystem.sh@57 -- # kill -0 81951 00:39:10.013 13:01:29 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:39:10.271 13:01:29 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:39:10.271 13:01:29 -- target/delete_subsystem.sh@57 -- # kill -0 81951 00:39:10.271 13:01:29 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:39:10.837 13:01:30 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:39:10.837 13:01:30 -- target/delete_subsystem.sh@57 -- # kill -0 81951 00:39:10.837 13:01:30 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:39:11.404 13:01:30 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:39:11.404 13:01:30 -- target/delete_subsystem.sh@57 -- # kill -0 81951 00:39:11.404 13:01:30 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:39:11.404 Initializing NVMe Controllers 00:39:11.404 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:39:11.404 Controller IO queue size 128, less than required. 00:39:11.404 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:39:11.404 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:39:11.404 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:39:11.404 Initialization complete. Launching workers. 
00:39:11.404 ======================================================== 00:39:11.404 Latency(us) 00:39:11.404 Device Information : IOPS MiB/s Average min max 00:39:11.404 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004694.22 1000105.55 1012787.19 00:39:11.404 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003492.01 1000166.77 1012770.11 00:39:11.404 ======================================================== 00:39:11.404 Total : 256.00 0.12 1004093.12 1000105.55 1012787.19 00:39:11.404 00:39:11.972 13:01:31 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:39:11.972 13:01:31 -- target/delete_subsystem.sh@57 -- # kill -0 81951 00:39:11.972 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (81951) - No such process 00:39:11.972 13:01:31 -- target/delete_subsystem.sh@67 -- # wait 81951 00:39:11.972 13:01:31 -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:39:11.972 13:01:31 -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:39:11.972 13:01:31 -- nvmf/common.sh@476 -- # nvmfcleanup 00:39:11.972 13:01:31 -- nvmf/common.sh@116 -- # sync 00:39:11.972 13:01:31 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:39:11.972 13:01:31 -- nvmf/common.sh@119 -- # set +e 00:39:11.972 13:01:31 -- nvmf/common.sh@120 -- # for i in {1..20} 00:39:11.972 13:01:31 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:39:11.972 rmmod nvme_tcp 00:39:11.972 rmmod nvme_fabrics 00:39:11.972 rmmod nvme_keyring 00:39:11.972 13:01:31 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:39:11.972 13:01:31 -- nvmf/common.sh@123 -- # set -e 00:39:11.972 13:01:31 -- nvmf/common.sh@124 -- # return 0 00:39:11.972 13:01:31 -- nvmf/common.sh@477 -- # '[' -n 81853 ']' 00:39:11.972 13:01:31 -- nvmf/common.sh@478 -- # killprocess 81853 00:39:11.972 13:01:31 -- common/autotest_common.sh@926 -- # '[' -z 81853 ']' 00:39:11.972 13:01:31 -- common/autotest_common.sh@930 -- # kill -0 81853 00:39:11.972 13:01:31 -- common/autotest_common.sh@931 -- # uname 00:39:11.972 13:01:31 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:39:11.972 13:01:31 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 81853 00:39:11.972 13:01:31 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:39:11.972 killing process with pid 81853 00:39:11.972 13:01:31 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:39:11.972 13:01:31 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 81853' 00:39:11.972 13:01:31 -- common/autotest_common.sh@945 -- # kill 81853 00:39:11.972 13:01:31 -- common/autotest_common.sh@950 -- # wait 81853 00:39:12.230 13:01:31 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:39:12.230 13:01:31 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:39:12.230 13:01:31 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:39:12.230 13:01:31 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:39:12.230 13:01:31 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:39:12.230 13:01:31 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:12.230 13:01:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:39:12.230 13:01:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:12.230 13:01:31 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:39:12.230 ************************************ 00:39:12.230 END TEST nvmf_delete_subsystem 00:39:12.230 ************************************ 
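The delete_subsystem scenario traced above comes down to a handful of RPCs plus a perf run that is still in flight when the subsystem goes away. A minimal stand-alone sketch, assuming an already running nvmf_tgt with its RPC socket at the default /var/tmp/spdk.sock, scripts/rpc.py from the SPDK repo, and an existing Delay0 bdev; the nvmf_delete_subsystem call is an assumption about the step the test performs while I/O is outstanding, since that RPC is not visible in this excerpt:

# create the subsystem, expose it over TCP and attach the Delay0 namespace
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
# 3 s of 70/30 random read/write at queue depth 128, as in the perf invocation above
build/bin/spdk_nvme_perf -c 0xC -q 128 -o 512 -w randrw -M 70 -t 3 -P 4 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
perf_pid=$!
# tear the subsystem down while the workload is still running, then reap the perf process
scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
wait "$perf_pid" || true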
00:39:12.230 00:39:12.230 real 0m9.243s 00:39:12.230 user 0m28.669s 00:39:12.230 sys 0m1.554s 00:39:12.230 13:01:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:12.230 13:01:31 -- common/autotest_common.sh@10 -- # set +x 00:39:12.230 13:01:31 -- nvmf/nvmf.sh@36 -- # [[ 0 -eq 1 ]] 00:39:12.230 13:01:31 -- nvmf/nvmf.sh@39 -- # [[ 0 -eq 1 ]] 00:39:12.230 13:01:31 -- nvmf/nvmf.sh@46 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:39:12.230 13:01:31 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:39:12.230 13:01:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:39:12.230 13:01:31 -- common/autotest_common.sh@10 -- # set +x 00:39:12.230 ************************************ 00:39:12.230 START TEST nvmf_host_management 00:39:12.230 ************************************ 00:39:12.230 13:01:31 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:39:12.489 * Looking for test storage... 00:39:12.489 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:39:12.489 13:01:31 -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:39:12.489 13:01:31 -- nvmf/common.sh@7 -- # uname -s 00:39:12.489 13:01:31 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:12.489 13:01:31 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:12.489 13:01:31 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:12.489 13:01:31 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:12.489 13:01:31 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:12.489 13:01:31 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:12.489 13:01:31 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:12.489 13:01:31 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:12.489 13:01:31 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:12.489 13:01:31 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:12.489 13:01:31 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:864ae4a3-cd96-439b-8ef2-6dab4d992115 00:39:12.489 13:01:31 -- nvmf/common.sh@18 -- # NVME_HOSTID=864ae4a3-cd96-439b-8ef2-6dab4d992115 00:39:12.489 13:01:31 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:12.489 13:01:31 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:12.489 13:01:31 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:39:12.489 13:01:31 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:39:12.489 13:01:31 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:12.489 13:01:31 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:12.489 13:01:31 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:12.489 13:01:31 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:12.489 13:01:31 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:12.489 13:01:31 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:12.489 13:01:31 -- paths/export.sh@5 -- # export PATH 00:39:12.489 13:01:31 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:12.489 13:01:31 -- nvmf/common.sh@46 -- # : 0 00:39:12.489 13:01:31 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:39:12.489 13:01:31 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:39:12.489 13:01:31 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:39:12.489 13:01:31 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:12.489 13:01:31 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:12.489 13:01:31 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:39:12.490 13:01:31 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:39:12.490 13:01:31 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:39:12.490 13:01:31 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:12.490 13:01:31 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:39:12.490 13:01:31 -- target/host_management.sh@104 -- # nvmftestinit 00:39:12.490 13:01:31 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:39:12.490 13:01:31 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:12.490 13:01:31 -- nvmf/common.sh@436 -- # prepare_net_devs 00:39:12.490 13:01:31 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:39:12.490 13:01:31 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:39:12.490 13:01:31 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:12.490 13:01:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:39:12.490 13:01:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:12.490 13:01:31 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:39:12.490 13:01:31 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:39:12.490 13:01:31 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:39:12.490 13:01:31 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:39:12.490 13:01:31 -- nvmf/common.sh@419 -- # [[ tcp == tcp 
]] 00:39:12.490 13:01:31 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:39:12.490 13:01:31 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:12.490 13:01:31 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:12.490 13:01:31 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:39:12.490 13:01:31 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:39:12.490 13:01:31 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:39:12.490 13:01:31 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:39:12.490 13:01:31 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:39:12.490 13:01:31 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:12.490 13:01:31 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:39:12.490 13:01:31 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:39:12.490 13:01:31 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:39:12.490 13:01:31 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:39:12.490 13:01:31 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:39:12.490 13:01:31 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:39:12.490 Cannot find device "nvmf_tgt_br" 00:39:12.490 13:01:31 -- nvmf/common.sh@154 -- # true 00:39:12.490 13:01:31 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:39:12.490 Cannot find device "nvmf_tgt_br2" 00:39:12.490 13:01:31 -- nvmf/common.sh@155 -- # true 00:39:12.490 13:01:31 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:39:12.490 13:01:31 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:39:12.490 Cannot find device "nvmf_tgt_br" 00:39:12.490 13:01:31 -- nvmf/common.sh@157 -- # true 00:39:12.490 13:01:31 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:39:12.490 Cannot find device "nvmf_tgt_br2" 00:39:12.490 13:01:31 -- nvmf/common.sh@158 -- # true 00:39:12.490 13:01:31 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:39:12.490 13:01:31 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:39:12.490 13:01:31 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:39:12.490 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:39:12.490 13:01:31 -- nvmf/common.sh@161 -- # true 00:39:12.490 13:01:31 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:39:12.490 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:39:12.490 13:01:31 -- nvmf/common.sh@162 -- # true 00:39:12.490 13:01:31 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:39:12.490 13:01:31 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:39:12.490 13:01:31 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:39:12.490 13:01:31 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:39:12.490 13:01:31 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:39:12.490 13:01:31 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:39:12.490 13:01:31 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:39:12.490 13:01:31 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:39:12.490 13:01:31 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:39:12.490 
13:01:31 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:39:12.490 13:01:31 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:39:12.490 13:01:31 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:39:12.490 13:01:31 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:39:12.490 13:01:31 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:39:12.749 13:01:31 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:39:12.749 13:01:31 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:39:12.749 13:01:31 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:39:12.749 13:01:31 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:39:12.749 13:01:31 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:39:12.749 13:01:31 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:39:12.749 13:01:31 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:39:12.749 13:01:31 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:39:12.749 13:01:31 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:39:12.749 13:01:31 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:39:12.749 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:12.749 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.077 ms 00:39:12.749 00:39:12.749 --- 10.0.0.2 ping statistics --- 00:39:12.749 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:12.749 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:39:12.749 13:01:31 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:39:12.749 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:39:12.749 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:39:12.749 00:39:12.749 --- 10.0.0.3 ping statistics --- 00:39:12.749 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:12.749 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:39:12.749 13:01:31 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:39:12.749 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:12.749 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:39:12.749 00:39:12.749 --- 10.0.0.1 ping statistics --- 00:39:12.749 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:12.749 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:39:12.749 13:01:31 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:12.749 13:01:31 -- nvmf/common.sh@421 -- # return 0 00:39:12.749 13:01:31 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:39:12.749 13:01:31 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:12.749 13:01:31 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:39:12.749 13:01:31 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:39:12.749 13:01:31 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:12.749 13:01:31 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:39:12.749 13:01:31 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:39:12.749 13:01:32 -- target/host_management.sh@106 -- # run_test nvmf_host_management nvmf_host_management 00:39:12.749 13:01:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:39:12.749 13:01:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:39:12.749 13:01:32 -- common/autotest_common.sh@10 -- # set +x 00:39:12.749 ************************************ 00:39:12.749 START TEST nvmf_host_management 00:39:12.749 ************************************ 00:39:12.749 13:01:32 -- common/autotest_common.sh@1104 -- # nvmf_host_management 00:39:12.749 13:01:32 -- target/host_management.sh@69 -- # starttarget 00:39:12.749 13:01:32 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:39:12.749 13:01:32 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:39:12.749 13:01:32 -- common/autotest_common.sh@712 -- # xtrace_disable 00:39:12.749 13:01:32 -- common/autotest_common.sh@10 -- # set +x 00:39:12.749 13:01:32 -- nvmf/common.sh@469 -- # nvmfpid=82185 00:39:12.749 13:01:32 -- nvmf/common.sh@470 -- # waitforlisten 82185 00:39:12.749 13:01:32 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:39:12.749 13:01:32 -- common/autotest_common.sh@819 -- # '[' -z 82185 ']' 00:39:12.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:12.749 13:01:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:12.749 13:01:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:39:12.749 13:01:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:12.749 13:01:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:39:12.749 13:01:32 -- common/autotest_common.sh@10 -- # set +x 00:39:12.749 [2024-07-22 13:01:32.081229] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:39:12.749 [2024-07-22 13:01:32.081312] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:13.008 [2024-07-22 13:01:32.222517] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:13.008 [2024-07-22 13:01:32.283227] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:39:13.008 [2024-07-22 13:01:32.283375] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:39:13.008 [2024-07-22 13:01:32.283388] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:13.008 [2024-07-22 13:01:32.283395] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:13.008 [2024-07-22 13:01:32.283576] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:39:13.008 [2024-07-22 13:01:32.283867] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:39:13.008 [2024-07-22 13:01:32.284297] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:39:13.008 [2024-07-22 13:01:32.284301] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:39:13.576 13:01:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:39:13.576 13:01:32 -- common/autotest_common.sh@852 -- # return 0 00:39:13.576 13:01:32 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:39:13.576 13:01:32 -- common/autotest_common.sh@718 -- # xtrace_disable 00:39:13.576 13:01:32 -- common/autotest_common.sh@10 -- # set +x 00:39:13.576 13:01:32 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:13.576 13:01:32 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:13.576 13:01:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:39:13.576 13:01:32 -- common/autotest_common.sh@10 -- # set +x 00:39:13.576 [2024-07-22 13:01:32.992870] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:13.840 13:01:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:39:13.840 13:01:33 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:39:13.840 13:01:33 -- common/autotest_common.sh@712 -- # xtrace_disable 00:39:13.840 13:01:33 -- common/autotest_common.sh@10 -- # set +x 00:39:13.840 13:01:33 -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:39:13.840 13:01:33 -- target/host_management.sh@23 -- # cat 00:39:13.840 13:01:33 -- target/host_management.sh@30 -- # rpc_cmd 00:39:13.841 13:01:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:39:13.841 13:01:33 -- common/autotest_common.sh@10 -- # set +x 00:39:13.841 Malloc0 00:39:13.841 [2024-07-22 13:01:33.079485] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:13.841 13:01:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:39:13.841 13:01:33 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:39:13.841 13:01:33 -- common/autotest_common.sh@718 -- # xtrace_disable 00:39:13.841 13:01:33 -- common/autotest_common.sh@10 -- # set +x 00:39:13.841 13:01:33 -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:39:13.841 13:01:33 -- target/host_management.sh@73 -- # perfpid=82257 00:39:13.841 13:01:33 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:39:13.841 13:01:33 -- target/host_management.sh@74 -- # waitforlisten 82257 /var/tmp/bdevperf.sock 00:39:13.841 13:01:33 -- nvmf/common.sh@520 -- # config=() 00:39:13.841 13:01:33 -- common/autotest_common.sh@819 -- # '[' -z 82257 ']' 00:39:13.841 13:01:33 -- nvmf/common.sh@520 -- # local subsystem config 00:39:13.841 13:01:33 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:39:13.841 13:01:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:39:13.841 
13:01:33 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:39:13.841 { 00:39:13.841 "params": { 00:39:13.841 "name": "Nvme$subsystem", 00:39:13.841 "trtype": "$TEST_TRANSPORT", 00:39:13.841 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:13.841 "adrfam": "ipv4", 00:39:13.841 "trsvcid": "$NVMF_PORT", 00:39:13.841 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:13.841 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:13.841 "hdgst": ${hdgst:-false}, 00:39:13.841 "ddgst": ${ddgst:-false} 00:39:13.841 }, 00:39:13.841 "method": "bdev_nvme_attach_controller" 00:39:13.841 } 00:39:13.841 EOF 00:39:13.841 )") 00:39:13.841 13:01:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:39:13.841 13:01:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:39:13.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:39:13.841 13:01:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:39:13.841 13:01:33 -- common/autotest_common.sh@10 -- # set +x 00:39:13.841 13:01:33 -- nvmf/common.sh@542 -- # cat 00:39:13.841 13:01:33 -- nvmf/common.sh@544 -- # jq . 00:39:13.841 13:01:33 -- nvmf/common.sh@545 -- # IFS=, 00:39:13.841 13:01:33 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:39:13.841 "params": { 00:39:13.841 "name": "Nvme0", 00:39:13.841 "trtype": "tcp", 00:39:13.841 "traddr": "10.0.0.2", 00:39:13.841 "adrfam": "ipv4", 00:39:13.841 "trsvcid": "4420", 00:39:13.841 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:13.841 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:13.841 "hdgst": false, 00:39:13.841 "ddgst": false 00:39:13.841 }, 00:39:13.841 "method": "bdev_nvme_attach_controller" 00:39:13.841 }' 00:39:13.841 [2024-07-22 13:01:33.180359] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:39:13.841 [2024-07-22 13:01:33.180442] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82257 ] 00:39:14.098 [2024-07-22 13:01:33.323804] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:14.098 [2024-07-22 13:01:33.396767] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:39:14.357 Running I/O for 10 seconds... 
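For reference, the JSON that gen_nvmf_target_json feeds to bdevperf through /dev/fd/63 resolves to the bdev_nvme_attach_controller call printed just above. Written out as a file it would look roughly like the sketch below; the attach parameters are taken verbatim from the trace, while the outer subsystems/bdev wrapper and the /tmp/bdevperf_nvme.json path are assumptions:

cat > /tmp/bdevperf_nvme.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# same workload as above: queue depth 64, 64 KiB verify I/O for 10 seconds
build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /tmp/bdevperf_nvme.json \
    -q 64 -o 65536 -w verify -t 10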
00:39:14.927 13:01:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:39:14.927 13:01:34 -- common/autotest_common.sh@852 -- # return 0 00:39:14.927 13:01:34 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:39:14.927 13:01:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:39:14.927 13:01:34 -- common/autotest_common.sh@10 -- # set +x 00:39:14.927 13:01:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:39:14.927 13:01:34 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:39:14.927 13:01:34 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:39:14.927 13:01:34 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:39:14.927 13:01:34 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:39:14.927 13:01:34 -- target/host_management.sh@52 -- # local ret=1 00:39:14.927 13:01:34 -- target/host_management.sh@53 -- # local i 00:39:14.927 13:01:34 -- target/host_management.sh@54 -- # (( i = 10 )) 00:39:14.927 13:01:34 -- target/host_management.sh@54 -- # (( i != 0 )) 00:39:14.927 13:01:34 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:39:14.927 13:01:34 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:39:14.927 13:01:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:39:14.927 13:01:34 -- common/autotest_common.sh@10 -- # set +x 00:39:14.927 13:01:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:39:14.927 13:01:34 -- target/host_management.sh@55 -- # read_io_count=2249 00:39:14.927 13:01:34 -- target/host_management.sh@58 -- # '[' 2249 -ge 100 ']' 00:39:14.927 13:01:34 -- target/host_management.sh@59 -- # ret=0 00:39:14.927 13:01:34 -- target/host_management.sh@60 -- # break 00:39:14.927 13:01:34 -- target/host_management.sh@64 -- # return 0 00:39:14.927 13:01:34 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:39:14.927 13:01:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:39:14.927 13:01:34 -- common/autotest_common.sh@10 -- # set +x 00:39:14.927 [2024-07-22 13:01:34.256639] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c48d00 is same with the state(5) to be set 00:39:14.927 [2024-07-22 13:01:34.257116] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c48d00 is same with the state(5) to be set 00:39:14.927 [2024-07-22 13:01:34.257553] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c48d00 is same with the state(5) to be set 00:39:14.927 [2024-07-22 13:01:34.257573] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c48d00 is same with the state(5) to be set 00:39:14.927 [2024-07-22 13:01:34.257582] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c48d00 is same with the state(5) to be set 00:39:14.927 [2024-07-22 13:01:34.257577] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:39:14.927 [2024-07-22 13:01:34.257592] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c48d00 is same with the state(5) to be set 00:39:14.927 [2024-07-22 13:01:34.257600] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c48d00 is same 
with the state(5) to be set 00:39:14.927 [2024-07-22 13:01:34.257609] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c48d00 is same with the state(5) to be set 00:39:14.927 [2024-07-22 13:01:34.257614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:14.927 [2024-07-22 13:01:34.257632] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c48d00 is same with the state(5) to be set 00:39:14.927 [2024-07-22 13:01:34.257641] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c48d00 is same with the state(5) to be set 00:39:14.927 [2024-07-22 13:01:34.257649] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c48d00 is same with [2024-07-22 13:01:34.257649] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsthe state(5) to be set 00:39:14.927 id:0 cdw10:00000000 cdw11:00000000 00:39:14.927 [2024-07-22 13:01:34.257658] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c48d00 is same with the state(5) to be set 00:39:14.927 [2024-07-22 13:01:34.257659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:14.927 [2024-07-22 13:01:34.257666] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c48d00 is same with the state(5) to be set 00:39:14.927 [2024-07-22 13:01:34.257670] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:39:14.927 [2024-07-22 13:01:34.257676] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c48d00 is same with the state(5) to be set 00:39:14.927 [2024-07-22 13:01:34.257679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:14.927 [2024-07-22 13:01:34.257685] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c48d00 is same with the state(5) to be set 00:39:14.927 [2024-07-22 13:01:34.257689] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:39:14.927 [2024-07-22 13:01:34.257694] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c48d00 is same with the state(5) to be set 00:39:14.927 [2024-07-22 13:01:34.257698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:14.927 [2024-07-22 13:01:34.257702] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c48d00 is same with the state(5) to be set 00:39:14.927 [2024-07-22 13:01:34.257708] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x172b7f0 is same with the state(5) to be set 00:39:14.927 [2024-07-22 13:01:34.257711] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c48d00 is same with the state(5) to be set 00:39:14.927 [2024-07-22 13:01:34.257720] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c48d00 is same with the state(5) to be set 00:39:14.927 [2024-07-22 13:01:34.257728] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c48d00 is same with the state(5) to be set 00:39:14.927 
[2024-07-22 13:01:34.257738] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c48d00 is same with the state(5) to be set 00:39:14.927 [2024-07-22 13:01:34.257762] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c48d00 is same with the state(5) to be set 00:39:14.927 [2024-07-22 13:01:34.257770] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c48d00 is same with the state(5) to be set 00:39:14.927 [2024-07-22 13:01:34.257777] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c48d00 is same with the state(5) to be set 00:39:14.927 [2024-07-22 13:01:34.257785] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c48d00 is same with the state(5) to be set 00:39:14.927 [2024-07-22 13:01:34.257793] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c48d00 is same with the state(5) to be set 00:39:14.927 [2024-07-22 13:01:34.257801] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c48d00 is same with the state(5) to be set 00:39:14.927 [2024-07-22 13:01:34.257808] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c48d00 is same with the state(5) to be set 00:39:14.927 [2024-07-22 13:01:34.257815] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c48d00 is same with the state(5) to be set 00:39:14.927 [2024-07-22 13:01:34.257830] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c48d00 is same with the state(5) to be set 00:39:14.927 [2024-07-22 13:01:34.257838] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c48d00 is same with the state(5) to be set 00:39:14.927 [2024-07-22 13:01:34.257846] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c48d00 is same with the state(5) to be set 00:39:14.927 [2024-07-22 13:01:34.257853] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c48d00 is same with the state(5) to be set 00:39:14.927 [2024-07-22 13:01:34.257861] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c48d00 is same with the state(5) to be set 00:39:14.927 [2024-07-22 13:01:34.257869] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c48d00 is same with the state(5) to be set 00:39:14.927 [2024-07-22 13:01:34.257876] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c48d00 is same with the state(5) to be set 00:39:14.927 [2024-07-22 13:01:34.257884] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c48d00 is same with the state(5) to be set 00:39:14.927 [2024-07-22 13:01:34.257892] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c48d00 is same with the state(5) to be set 00:39:14.927 [2024-07-22 13:01:34.257900] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c48d00 is same with the state(5) to be set 00:39:14.927 [2024-07-22 13:01:34.257907] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c48d00 is same with the state(5) to be set 00:39:14.927 [2024-07-22 13:01:34.257914] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c48d00 is same with the state(5) to be set 00:39:14.927 [2024-07-22 13:01:34.257922] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1c48d00 is same with the state(5) to be set 00:39:14.927 [2024-07-22 13:01:34.257930] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c48d00 is same with the state(5) to be set 00:39:14.928 [2024-07-22 13:01:34.257937] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c48d00 is same with the state(5) to be set 00:39:14.928 [2024-07-22 13:01:34.257945] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c48d00 is same with the state(5) to be set 00:39:14.928 [2024-07-22 13:01:34.258067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:48000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:14.928 [2024-07-22 13:01:34.258085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:14.928 [2024-07-22 13:01:34.258111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:48128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:14.928 [2024-07-22 13:01:34.258123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:14.928 [2024-07-22 13:01:34.258134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:42624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:14.928 [2024-07-22 13:01:34.258172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:14.928 [2024-07-22 13:01:34.258184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:42880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:14.928 [2024-07-22 13:01:34.258194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:14.928 [2024-07-22 13:01:34.258205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:43008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:14.928 [2024-07-22 13:01:34.258215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:14.928 [2024-07-22 13:01:34.258226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:43136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:14.928 [2024-07-22 13:01:34.258235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:14.928 [2024-07-22 13:01:34.258246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:48384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:14.928 [2024-07-22 13:01:34.258255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:14.928 [2024-07-22 13:01:34.258266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:48512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:14.928 [2024-07-22 13:01:34.258275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:14.928 [2024-07-22 13:01:34.258286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:48640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:39:14.928 [2024-07-22 13:01:34.258295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:14.928 [2024-07-22 13:01:34.258306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:48768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:14.928 [2024-07-22 13:01:34.258320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:14.928 [2024-07-22 13:01:34.258331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:48896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:14.928 [2024-07-22 13:01:34.258340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:14.928 [2024-07-22 13:01:34.258352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:49024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:14.928 [2024-07-22 13:01:34.258360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:14.928 [2024-07-22 13:01:34.258372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:49152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:14.928 [2024-07-22 13:01:34.258381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:14.928 [2024-07-22 13:01:34.258392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:49280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:14.928 [2024-07-22 13:01:34.258401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:14.928 [2024-07-22 13:01:34.258412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:49408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:14.928 [2024-07-22 13:01:34.258431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:14.928 [2024-07-22 13:01:34.258443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:49536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:14.928 [2024-07-22 13:01:34.258452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:14.928 [2024-07-22 13:01:34.258463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:49664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:14.928 [2024-07-22 13:01:34.258472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:14.928 [2024-07-22 13:01:34.258483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:49792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:14.928 [2024-07-22 13:01:34.258493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:14.928 [2024-07-22 13:01:34.258504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:49920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:14.928 
[2024-07-22 13:01:34.258513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:14.928 [2024-07-22 13:01:34.258530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:50048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:14.928 [2024-07-22 13:01:34.258539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:14.928 [2024-07-22 13:01:34.258550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:43392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:14.928 [2024-07-22 13:01:34.258559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:14.928 [2024-07-22 13:01:34.258570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:43648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:14.928 [2024-07-22 13:01:34.258579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:14.928 [2024-07-22 13:01:34.258590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:44160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:14.928 [2024-07-22 13:01:34.258599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:14.928 [2024-07-22 13:01:34.258610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:44416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:14.928 [2024-07-22 13:01:34.258619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:14.928 [2024-07-22 13:01:34.258629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:44672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:14.928 [2024-07-22 13:01:34.258638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:14.928 [2024-07-22 13:01:34.258650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:50176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:14.928 [2024-07-22 13:01:34.258664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:14.928 [2024-07-22 13:01:34.258675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:50304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:14.928 [2024-07-22 13:01:34.258683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:14.928 [2024-07-22 13:01:34.258694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:50432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:14.928 [2024-07-22 13:01:34.258703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:14.928 [2024-07-22 13:01:34.258715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:50560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:14.928 [2024-07-22 
13:01:34.258725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:14.928 [2024-07-22 13:01:34.258736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:50688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:14.928 [2024-07-22 13:01:34.258745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:14.928 [2024-07-22 13:01:34.258756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:50816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:14.928 [2024-07-22 13:01:34.258765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:14.928 [2024-07-22 13:01:34.258776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:50944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:14.928 [2024-07-22 13:01:34.258785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:14.928 [2024-07-22 13:01:34.258796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:45056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:14.928 [2024-07-22 13:01:34.258805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:14.928 [2024-07-22 13:01:34.258816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:45312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:14.928 [2024-07-22 13:01:34.258825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:14.928 [2024-07-22 13:01:34.258836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:51072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:14.928 [2024-07-22 13:01:34.258860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:14.928 [2024-07-22 13:01:34.258871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:51200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:14.928 [2024-07-22 13:01:34.258880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:14.929 [2024-07-22 13:01:34.258890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:51328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:14.929 [2024-07-22 13:01:34.258899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:14.929 [2024-07-22 13:01:34.258910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:45440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:14.929 [2024-07-22 13:01:34.258919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:14.929 [2024-07-22 13:01:34.258930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:45568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:14.929 [2024-07-22 13:01:34.258939] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:14.929 [2024-07-22 13:01:34.258949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:45696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:14.929 [2024-07-22 13:01:34.258958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:14.929 [2024-07-22 13:01:34.258969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:45952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:14.929 [2024-07-22 13:01:34.258989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:14.929 [2024-07-22 13:01:34.259000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:46208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:14.929 [2024-07-22 13:01:34.259013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:14.929 [2024-07-22 13:01:34.259024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:46336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:14.929 [2024-07-22 13:01:34.259033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:14.929 [2024-07-22 13:01:34.259043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:51456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:14.929 [2024-07-22 13:01:34.259052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:14.929 [2024-07-22 13:01:34.259063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:51584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:14.929 [2024-07-22 13:01:34.259072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:14.929 [2024-07-22 13:01:34.259083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:51712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:14.929 [2024-07-22 13:01:34.259092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:14.929 [2024-07-22 13:01:34.259102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:51840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:14.929 [2024-07-22 13:01:34.259111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:14.929 [2024-07-22 13:01:34.259122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:51968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:14.929 [2024-07-22 13:01:34.259131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:14.929 [2024-07-22 13:01:34.259141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:52096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:14.929 [2024-07-22 13:01:34.259176] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:14.929 [2024-07-22 13:01:34.259188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:52224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:14.929 [2024-07-22 13:01:34.259197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:14.929 [2024-07-22 13:01:34.259209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:52352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:14.929 [2024-07-22 13:01:34.259218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:14.929 [2024-07-22 13:01:34.259229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:52480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:14.929 [2024-07-22 13:01:34.259238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:14.929 [2024-07-22 13:01:34.259249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:52608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:14.929 [2024-07-22 13:01:34.259258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:14.929 [2024-07-22 13:01:34.259269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:52736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:14.929 [2024-07-22 13:01:34.259279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:14.929 [2024-07-22 13:01:34.259290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:52864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:14.929 [2024-07-22 13:01:34.259298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:14.929 [2024-07-22 13:01:34.259309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:52992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:14.929 [2024-07-22 13:01:34.259318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:14.929 [2024-07-22 13:01:34.259329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:46848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:14.929 [2024-07-22 13:01:34.259338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:14.929 [2024-07-22 13:01:34.259349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:47104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:14.929 [2024-07-22 13:01:34.259363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:14.929 [2024-07-22 13:01:34.259374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:47232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:14.929 [2024-07-22 13:01:34.259383] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:14.929 [2024-07-22 13:01:34.259395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:47360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:14.929 [2024-07-22 13:01:34.259404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:14.929 [2024-07-22 13:01:34.259415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:47488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:14.929 [2024-07-22 13:01:34.259425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:14.929 [2024-07-22 13:01:34.259436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:47744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:14.929 [2024-07-22 13:01:34.259447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:14.929 [2024-07-22 13:01:34.259457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:53120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:14.929 [2024-07-22 13:01:34.259466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:14.929 [2024-07-22 13:01:34.259477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:53248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:14.929 [2024-07-22 13:01:34.259486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:14.929 [2024-07-22 13:01:34.259496] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1755e30 is same with the state(5) to be set 00:39:14.929 [2024-07-22 13:01:34.259559] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1755e30 was disconnected and freed. reset controller. 
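The wall of *NOTICE* completions above is the expected fallout of the controller reset in this test: deleting the I/O submission queue aborts every queued READ/WRITE with status (00/08), which the NVMe spec defines as generic status 0x08, Command Aborted due to SQ Deletion, before the qpair is freed and the controller is reset. If a burst like this needs to be summarized offline, a minimal sketch, assuming the console output was saved to a local file named build.log (hypothetical name):

# Tally aborted commands per opcode from this burst (build.log is a
# placeholder for wherever the console output above was captured).
grep -oE 'nvme_io_qpair_print_command: \*NOTICE\*: (READ|WRITE)' build.log \
    | awk '{print $NF}' | sort | uniq -c

# Count the matching completions; all of them carry status (00/08),
# i.e. generic status "Command Aborted due to SQ Deletion".
grep -o 'ABORTED - SQ DELETION (00/08)' build.log | wc -l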
00:39:14.929 [2024-07-22 13:01:34.260690] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:39:14.929 13:01:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:39:14.929 13:01:34 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:39:14.929 13:01:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:39:14.929 13:01:34 -- common/autotest_common.sh@10 -- # set +x 00:39:14.929 task offset: 48000 on job bdev=Nvme0n1 fails 00:39:14.929 00:39:14.929 Latency(us) 00:39:14.929 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:14.929 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:39:14.929 Job: Nvme0n1 ended in about 0.69 seconds with error 00:39:14.929 Verification LBA range: start 0x0 length 0x400 00:39:14.929 Nvme0n1 : 0.69 3490.85 218.18 93.05 0.00 17552.64 2412.92 23712.12 00:39:14.929 =================================================================================================================== 00:39:14.929 Total : 3490.85 218.18 93.05 0.00 17552.64 2412.92 23712.12 00:39:14.929 [2024-07-22 13:01:34.262599] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:39:14.929 [2024-07-22 13:01:34.262624] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x172b7f0 (9): Bad file descriptor 00:39:14.929 13:01:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:39:14.929 13:01:34 -- target/host_management.sh@87 -- # sleep 1 00:39:14.929 [2024-07-22 13:01:34.272011] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:39:15.866 13:01:35 -- target/host_management.sh@91 -- # kill -9 82257 00:39:15.866 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (82257) - No such process 00:39:15.866 13:01:35 -- target/host_management.sh@91 -- # true 00:39:15.866 13:01:35 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:39:15.866 13:01:35 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:39:15.866 13:01:35 -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:39:15.866 13:01:35 -- nvmf/common.sh@520 -- # config=() 00:39:15.866 13:01:35 -- nvmf/common.sh@520 -- # local subsystem config 00:39:15.866 13:01:35 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:39:15.866 13:01:35 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:39:15.866 { 00:39:15.866 "params": { 00:39:15.866 "name": "Nvme$subsystem", 00:39:15.866 "trtype": "$TEST_TRANSPORT", 00:39:15.866 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:15.866 "adrfam": "ipv4", 00:39:15.866 "trsvcid": "$NVMF_PORT", 00:39:15.866 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:15.866 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:15.866 "hdgst": ${hdgst:-false}, 00:39:15.866 "ddgst": ${ddgst:-false} 00:39:15.866 }, 00:39:15.866 "method": "bdev_nvme_attach_controller" 00:39:15.866 } 00:39:15.866 EOF 00:39:15.866 )") 00:39:15.866 13:01:35 -- nvmf/common.sh@542 -- # cat 00:39:15.866 13:01:35 -- nvmf/common.sh@544 -- # jq . 
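rpc_cmd in the trace above is effectively a thin wrapper around scripts/rpc.py talking to the target's default UNIX socket, so the host-authorization step it performs can be reproduced directly. A minimal sketch, with the socket path left at the default /var/tmp/spdk.sock:

# Re-admit the host NQN on the subsystem, the same call the script issues
# through its rpc_cmd wrapper; the default RPC socket is assumed here.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
    nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0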
00:39:15.866 13:01:35 -- nvmf/common.sh@545 -- # IFS=, 00:39:15.866 13:01:35 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:39:15.866 "params": { 00:39:15.866 "name": "Nvme0", 00:39:15.866 "trtype": "tcp", 00:39:15.866 "traddr": "10.0.0.2", 00:39:15.866 "adrfam": "ipv4", 00:39:15.866 "trsvcid": "4420", 00:39:15.866 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:15.866 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:15.866 "hdgst": false, 00:39:15.866 "ddgst": false 00:39:15.866 }, 00:39:15.866 "method": "bdev_nvme_attach_controller" 00:39:15.866 }' 00:39:16.124 [2024-07-22 13:01:35.328542] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:39:16.125 [2024-07-22 13:01:35.329113] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82313 ] 00:39:16.125 [2024-07-22 13:01:35.468482] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:16.125 [2024-07-22 13:01:35.528934] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:39:16.383 Running I/O for 1 seconds... 00:39:17.335 00:39:17.335 Latency(us) 00:39:17.335 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:17.335 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:39:17.335 Verification LBA range: start 0x0 length 0x400 00:39:17.335 Nvme0n1 : 1.01 3631.36 226.96 0.00 0.00 17312.46 1422.43 24427.05 00:39:17.335 =================================================================================================================== 00:39:17.335 Total : 3631.36 226.96 0.00 0.00 17312.46 1422.43 24427.05 00:39:17.618 13:01:36 -- target/host_management.sh@101 -- # stoptarget 00:39:17.618 13:01:36 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:39:17.618 13:01:36 -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:39:17.618 13:01:36 -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:39:17.618 13:01:36 -- target/host_management.sh@40 -- # nvmftestfini 00:39:17.618 13:01:36 -- nvmf/common.sh@476 -- # nvmfcleanup 00:39:17.618 13:01:36 -- nvmf/common.sh@116 -- # sync 00:39:17.618 13:01:36 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:39:17.618 13:01:36 -- nvmf/common.sh@119 -- # set +e 00:39:17.618 13:01:36 -- nvmf/common.sh@120 -- # for i in {1..20} 00:39:17.618 13:01:36 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:39:17.618 rmmod nvme_tcp 00:39:17.618 rmmod nvme_fabrics 00:39:17.618 rmmod nvme_keyring 00:39:17.618 13:01:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:39:17.877 13:01:37 -- nvmf/common.sh@123 -- # set -e 00:39:17.877 13:01:37 -- nvmf/common.sh@124 -- # return 0 00:39:17.877 13:01:37 -- nvmf/common.sh@477 -- # '[' -n 82185 ']' 00:39:17.877 13:01:37 -- nvmf/common.sh@478 -- # killprocess 82185 00:39:17.877 13:01:37 -- common/autotest_common.sh@926 -- # '[' -z 82185 ']' 00:39:17.877 13:01:37 -- common/autotest_common.sh@930 -- # kill -0 82185 00:39:17.877 13:01:37 -- common/autotest_common.sh@931 -- # uname 00:39:17.877 13:01:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:39:17.877 13:01:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 82185 00:39:17.877 13:01:37 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:39:17.877 13:01:37 -- 
common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:39:17.877 killing process with pid 82185 00:39:17.877 13:01:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 82185' 00:39:17.877 13:01:37 -- common/autotest_common.sh@945 -- # kill 82185 00:39:17.877 13:01:37 -- common/autotest_common.sh@950 -- # wait 82185 00:39:17.877 [2024-07-22 13:01:37.261487] app.c: 605:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:39:17.877 13:01:37 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:39:17.877 13:01:37 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:39:17.877 13:01:37 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:39:17.877 13:01:37 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:39:17.877 13:01:37 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:39:17.877 13:01:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:17.877 13:01:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:39:17.877 13:01:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:18.137 13:01:37 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:39:18.137 00:39:18.137 real 0m5.294s 00:39:18.137 user 0m22.247s 00:39:18.137 sys 0m1.273s 00:39:18.137 13:01:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:18.137 13:01:37 -- common/autotest_common.sh@10 -- # set +x 00:39:18.137 ************************************ 00:39:18.137 END TEST nvmf_host_management 00:39:18.137 ************************************ 00:39:18.137 13:01:37 -- target/host_management.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:39:18.137 00:39:18.137 real 0m5.779s 00:39:18.137 user 0m22.365s 00:39:18.137 sys 0m1.510s 00:39:18.137 13:01:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:18.137 13:01:37 -- common/autotest_common.sh@10 -- # set +x 00:39:18.137 ************************************ 00:39:18.137 END TEST nvmf_host_management 00:39:18.137 ************************************ 00:39:18.137 13:01:37 -- nvmf/nvmf.sh@47 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:39:18.137 13:01:37 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:39:18.137 13:01:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:39:18.137 13:01:37 -- common/autotest_common.sh@10 -- # set +x 00:39:18.137 ************************************ 00:39:18.137 START TEST nvmf_lvol 00:39:18.137 ************************************ 00:39:18.137 13:01:37 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:39:18.137 * Looking for test storage... 
00:39:18.137 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:39:18.137 13:01:37 -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:39:18.137 13:01:37 -- nvmf/common.sh@7 -- # uname -s 00:39:18.137 13:01:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:18.137 13:01:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:18.137 13:01:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:18.137 13:01:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:18.137 13:01:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:18.137 13:01:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:18.137 13:01:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:18.137 13:01:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:18.137 13:01:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:18.137 13:01:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:18.137 13:01:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:864ae4a3-cd96-439b-8ef2-6dab4d992115 00:39:18.137 13:01:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=864ae4a3-cd96-439b-8ef2-6dab4d992115 00:39:18.137 13:01:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:18.137 13:01:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:18.137 13:01:37 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:39:18.137 13:01:37 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:39:18.137 13:01:37 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:18.137 13:01:37 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:18.137 13:01:37 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:18.137 13:01:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:18.137 13:01:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:18.137 13:01:37 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:18.137 13:01:37 -- 
paths/export.sh@5 -- # export PATH 00:39:18.137 13:01:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:18.137 13:01:37 -- nvmf/common.sh@46 -- # : 0 00:39:18.137 13:01:37 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:39:18.137 13:01:37 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:39:18.137 13:01:37 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:39:18.137 13:01:37 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:18.137 13:01:37 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:18.137 13:01:37 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:39:18.137 13:01:37 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:39:18.137 13:01:37 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:39:18.137 13:01:37 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:18.137 13:01:37 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:39:18.137 13:01:37 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:39:18.137 13:01:37 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:39:18.137 13:01:37 -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:39:18.137 13:01:37 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:39:18.137 13:01:37 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:39:18.137 13:01:37 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:18.137 13:01:37 -- nvmf/common.sh@436 -- # prepare_net_devs 00:39:18.137 13:01:37 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:39:18.137 13:01:37 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:39:18.137 13:01:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:18.138 13:01:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:39:18.138 13:01:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:18.138 13:01:37 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:39:18.138 13:01:37 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:39:18.138 13:01:37 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:39:18.138 13:01:37 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:39:18.138 13:01:37 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:39:18.138 13:01:37 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:39:18.138 13:01:37 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:18.138 13:01:37 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:18.138 13:01:37 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:39:18.138 13:01:37 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:39:18.138 13:01:37 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:39:18.138 13:01:37 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:39:18.138 13:01:37 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:39:18.138 13:01:37 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:18.138 13:01:37 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:39:18.138 13:01:37 -- nvmf/common.sh@149 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:39:18.138 13:01:37 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:39:18.138 13:01:37 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:39:18.138 13:01:37 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:39:18.138 13:01:37 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:39:18.138 Cannot find device "nvmf_tgt_br" 00:39:18.138 13:01:37 -- nvmf/common.sh@154 -- # true 00:39:18.138 13:01:37 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:39:18.138 Cannot find device "nvmf_tgt_br2" 00:39:18.138 13:01:37 -- nvmf/common.sh@155 -- # true 00:39:18.138 13:01:37 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:39:18.138 13:01:37 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:39:18.138 Cannot find device "nvmf_tgt_br" 00:39:18.138 13:01:37 -- nvmf/common.sh@157 -- # true 00:39:18.138 13:01:37 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:39:18.397 Cannot find device "nvmf_tgt_br2" 00:39:18.397 13:01:37 -- nvmf/common.sh@158 -- # true 00:39:18.397 13:01:37 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:39:18.397 13:01:37 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:39:18.397 13:01:37 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:39:18.397 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:39:18.397 13:01:37 -- nvmf/common.sh@161 -- # true 00:39:18.397 13:01:37 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:39:18.397 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:39:18.397 13:01:37 -- nvmf/common.sh@162 -- # true 00:39:18.397 13:01:37 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:39:18.397 13:01:37 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:39:18.397 13:01:37 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:39:18.397 13:01:37 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:39:18.397 13:01:37 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:39:18.397 13:01:37 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:39:18.397 13:01:37 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:39:18.397 13:01:37 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:39:18.397 13:01:37 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:39:18.397 13:01:37 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:39:18.397 13:01:37 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:39:18.397 13:01:37 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:39:18.397 13:01:37 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:39:18.397 13:01:37 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:39:18.397 13:01:37 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:39:18.397 13:01:37 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:39:18.397 13:01:37 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:39:18.397 13:01:37 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:39:18.397 13:01:37 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:39:18.397 13:01:37 -- 
nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:39:18.397 13:01:37 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:39:18.397 13:01:37 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:39:18.397 13:01:37 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:39:18.397 13:01:37 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:39:18.397 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:18.397 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:39:18.397 00:39:18.397 --- 10.0.0.2 ping statistics --- 00:39:18.397 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:18.397 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:39:18.397 13:01:37 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:39:18.397 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:39:18.397 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.034 ms 00:39:18.397 00:39:18.397 --- 10.0.0.3 ping statistics --- 00:39:18.397 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:18.397 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:39:18.397 13:01:37 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:39:18.397 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:39:18.397 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:39:18.397 00:39:18.397 --- 10.0.0.1 ping statistics --- 00:39:18.397 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:18.397 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:39:18.397 13:01:37 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:18.397 13:01:37 -- nvmf/common.sh@421 -- # return 0 00:39:18.397 13:01:37 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:39:18.397 13:01:37 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:18.397 13:01:37 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:39:18.397 13:01:37 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:39:18.397 13:01:37 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:18.397 13:01:37 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:39:18.397 13:01:37 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:39:18.397 13:01:37 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:39:18.397 13:01:37 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:39:18.397 13:01:37 -- common/autotest_common.sh@712 -- # xtrace_disable 00:39:18.397 13:01:37 -- common/autotest_common.sh@10 -- # set +x 00:39:18.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:18.656 13:01:37 -- nvmf/common.sh@469 -- # nvmfpid=82531 00:39:18.656 13:01:37 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:39:18.656 13:01:37 -- nvmf/common.sh@470 -- # waitforlisten 82531 00:39:18.656 13:01:37 -- common/autotest_common.sh@819 -- # '[' -z 82531 ']' 00:39:18.656 13:01:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:18.656 13:01:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:39:18.656 13:01:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
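The nvmf_veth_init trace above assembles the whole test network from scratch: a dedicated namespace for the target, veth pairs for the initiator and both target interfaces, a bridge joining the host-side peers, an iptables rule admitting TCP port 4420, and ping checks across 10.0.0.1/2/3. A standalone sketch of the same sequence, trimmed to a single target interface and with no error handling (run as root):

#!/usr/bin/env bash
# Minimal re-creation of the veth topology the trace above sets up.
# Addresses and interface names mirror nvmf/common.sh.
set -e

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if                                   # initiator side
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if     # target side

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br

iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

ping -c 1 10.0.0.2                                     # initiator -> target
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1      # target -> initiator

Teardown is the mirror image: the ip link delete / ip netns calls that nvmf_veth_init runs first, where it scrubs leftovers from a previous run.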
00:39:18.656 13:01:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:39:18.656 13:01:37 -- common/autotest_common.sh@10 -- # set +x 00:39:18.656 [2024-07-22 13:01:37.874480] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:39:18.656 [2024-07-22 13:01:37.874737] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:18.656 [2024-07-22 13:01:38.017876] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:39:18.915 [2024-07-22 13:01:38.099457] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:39:18.915 [2024-07-22 13:01:38.099840] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:18.915 [2024-07-22 13:01:38.100010] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:18.915 [2024-07-22 13:01:38.100294] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:18.915 [2024-07-22 13:01:38.100503] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:39:18.915 [2024-07-22 13:01:38.100674] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:39:18.915 [2024-07-22 13:01:38.100678] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:39:19.482 13:01:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:39:19.482 13:01:38 -- common/autotest_common.sh@852 -- # return 0 00:39:19.482 13:01:38 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:39:19.482 13:01:38 -- common/autotest_common.sh@718 -- # xtrace_disable 00:39:19.482 13:01:38 -- common/autotest_common.sh@10 -- # set +x 00:39:19.482 13:01:38 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:19.482 13:01:38 -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:39:19.741 [2024-07-22 13:01:39.123963] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:19.741 13:01:39 -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:20.308 13:01:39 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:39:20.308 13:01:39 -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:20.567 13:01:39 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:39:20.567 13:01:39 -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:39:20.826 13:01:40 -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:39:21.084 13:01:40 -- target/nvmf_lvol.sh@29 -- # lvs=e65463d1-4f6e-4b58-bd61-794594319cd5 00:39:21.084 13:01:40 -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u e65463d1-4f6e-4b58-bd61-794594319cd5 lvol 20 00:39:21.084 13:01:40 -- target/nvmf_lvol.sh@32 -- # lvol=96c07ed8-a829-41da-86be-9df6fd6ddaa7 00:39:21.084 13:01:40 -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:39:21.342 13:01:40 -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode0 96c07ed8-a829-41da-86be-9df6fd6ddaa7 00:39:21.600 13:01:40 -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:39:21.858 [2024-07-22 13:01:41.141569] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:21.858 13:01:41 -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:39:22.116 13:01:41 -- target/nvmf_lvol.sh@42 -- # perf_pid=82684 00:39:22.116 13:01:41 -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:39:22.116 13:01:41 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:39:23.050 13:01:42 -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 96c07ed8-a829-41da-86be-9df6fd6ddaa7 MY_SNAPSHOT 00:39:23.615 13:01:42 -- target/nvmf_lvol.sh@47 -- # snapshot=d61c67a8-84bc-479d-8345-cd402c9073a1 00:39:23.615 13:01:42 -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 96c07ed8-a829-41da-86be-9df6fd6ddaa7 30 00:39:23.871 13:01:43 -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone d61c67a8-84bc-479d-8345-cd402c9073a1 MY_CLONE 00:39:24.128 13:01:43 -- target/nvmf_lvol.sh@49 -- # clone=934af71f-3e4e-466e-a7bc-cc158c22213f 00:39:24.128 13:01:43 -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 934af71f-3e4e-466e-a7bc-cc158c22213f 00:39:24.695 13:01:43 -- target/nvmf_lvol.sh@53 -- # wait 82684 00:39:32.805 Initializing NVMe Controllers 00:39:32.805 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:39:32.805 Controller IO queue size 128, less than required. 00:39:32.805 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:39:32.805 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:39:32.805 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:39:32.805 Initialization complete. Launching workers. 
00:39:32.805 ======================================================== 00:39:32.805 Latency(us) 00:39:32.805 Device Information : IOPS MiB/s Average min max 00:39:32.805 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10819.10 42.26 11834.03 2456.87 71777.58 00:39:32.805 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10851.40 42.39 11803.45 304.05 92464.73 00:39:32.805 ======================================================== 00:39:32.805 Total : 21670.50 84.65 11818.72 304.05 92464.73 00:39:32.805 00:39:32.805 13:01:51 -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:39:32.805 13:01:52 -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 96c07ed8-a829-41da-86be-9df6fd6ddaa7 00:39:33.063 13:01:52 -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e65463d1-4f6e-4b58-bd61-794594319cd5 00:39:33.336 13:01:52 -- target/nvmf_lvol.sh@60 -- # rm -f 00:39:33.336 13:01:52 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:39:33.336 13:01:52 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:39:33.336 13:01:52 -- nvmf/common.sh@476 -- # nvmfcleanup 00:39:33.336 13:01:52 -- nvmf/common.sh@116 -- # sync 00:39:33.336 13:01:52 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:39:33.336 13:01:52 -- nvmf/common.sh@119 -- # set +e 00:39:33.336 13:01:52 -- nvmf/common.sh@120 -- # for i in {1..20} 00:39:33.336 13:01:52 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:39:33.336 rmmod nvme_tcp 00:39:33.336 rmmod nvme_fabrics 00:39:33.336 rmmod nvme_keyring 00:39:33.336 13:01:52 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:39:33.336 13:01:52 -- nvmf/common.sh@123 -- # set -e 00:39:33.336 13:01:52 -- nvmf/common.sh@124 -- # return 0 00:39:33.336 13:01:52 -- nvmf/common.sh@477 -- # '[' -n 82531 ']' 00:39:33.336 13:01:52 -- nvmf/common.sh@478 -- # killprocess 82531 00:39:33.336 13:01:52 -- common/autotest_common.sh@926 -- # '[' -z 82531 ']' 00:39:33.336 13:01:52 -- common/autotest_common.sh@930 -- # kill -0 82531 00:39:33.336 13:01:52 -- common/autotest_common.sh@931 -- # uname 00:39:33.336 13:01:52 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:39:33.336 13:01:52 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 82531 00:39:33.336 killing process with pid 82531 00:39:33.336 13:01:52 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:39:33.336 13:01:52 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:39:33.336 13:01:52 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 82531' 00:39:33.336 13:01:52 -- common/autotest_common.sh@945 -- # kill 82531 00:39:33.336 13:01:52 -- common/autotest_common.sh@950 -- # wait 82531 00:39:33.641 13:01:52 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:39:33.641 13:01:52 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:39:33.641 13:01:52 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:39:33.641 13:01:52 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:39:33.641 13:01:52 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:39:33.641 13:01:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:33.641 13:01:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:39:33.641 13:01:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:33.641 13:01:52 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 
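The nvmf_lvol run that wraps up above walks the full logical-volume lifecycle over NVMe/TCP: an lvstore on a RAID-0 of two 64 MB malloc bdevs, a lvol carved from it, a snapshot, a resize, a clone of the snapshot, an inflate of the clone, a 10-second spdk_nvme_perf workload, and teardown in reverse order. Condensed into the underlying RPC calls, with the run's generated UUIDs captured into shell variables; the perf step itself is omitted:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Backing store and lvol lifecycle, same calls as the trace above.
$RPC bdev_malloc_create 64 512                                   # -> Malloc0
$RPC bdev_malloc_create 64 512                                   # -> Malloc1
$RPC bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
LVS=$($RPC bdev_lvol_create_lvstore raid0 lvs)
LVOL=$($RPC bdev_lvol_create -u "$LVS" lvol 20)
SNAP=$($RPC bdev_lvol_snapshot "$LVOL" MY_SNAPSHOT)
$RPC bdev_lvol_resize "$LVOL" 30
CLONE=$($RPC bdev_lvol_clone "$SNAP" MY_CLONE)
$RPC bdev_lvol_inflate "$CLONE"

# Export over NVMe/TCP, as in the trace.
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$LVOL"
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

# Teardown mirrors the end of the run: subsystem first, then lvol, then lvstore.
$RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
$RPC bdev_lvol_delete "$LVOL"
$RPC bdev_lvol_delete_lvstore -u "$LVS"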
00:39:33.641 ************************************ 00:39:33.641 END TEST nvmf_lvol 00:39:33.641 ************************************ 00:39:33.641 00:39:33.641 real 0m15.527s 00:39:33.641 user 1m5.240s 00:39:33.641 sys 0m3.822s 00:39:33.641 13:01:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:33.641 13:01:52 -- common/autotest_common.sh@10 -- # set +x 00:39:33.641 13:01:52 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:39:33.641 13:01:52 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:39:33.641 13:01:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:39:33.641 13:01:52 -- common/autotest_common.sh@10 -- # set +x 00:39:33.641 ************************************ 00:39:33.641 START TEST nvmf_lvs_grow 00:39:33.641 ************************************ 00:39:33.641 13:01:52 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:39:33.641 * Looking for test storage... 00:39:33.641 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:39:33.641 13:01:53 -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:39:33.641 13:01:53 -- nvmf/common.sh@7 -- # uname -s 00:39:33.900 13:01:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:33.900 13:01:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:33.900 13:01:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:33.900 13:01:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:33.900 13:01:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:33.900 13:01:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:33.900 13:01:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:33.900 13:01:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:33.900 13:01:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:33.900 13:01:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:33.900 13:01:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:864ae4a3-cd96-439b-8ef2-6dab4d992115 00:39:33.900 13:01:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=864ae4a3-cd96-439b-8ef2-6dab4d992115 00:39:33.900 13:01:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:33.900 13:01:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:33.900 13:01:53 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:39:33.900 13:01:53 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:39:33.900 13:01:53 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:33.900 13:01:53 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:33.900 13:01:53 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:33.900 13:01:53 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:33.900 13:01:53 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:33.901 13:01:53 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:33.901 13:01:53 -- paths/export.sh@5 -- # export PATH 00:39:33.901 13:01:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:33.901 13:01:53 -- nvmf/common.sh@46 -- # : 0 00:39:33.901 13:01:53 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:39:33.901 13:01:53 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:39:33.901 13:01:53 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:39:33.901 13:01:53 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:33.901 13:01:53 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:33.901 13:01:53 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:39:33.901 13:01:53 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:39:33.901 13:01:53 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:39:33.901 13:01:53 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:39:33.901 13:01:53 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:39:33.901 13:01:53 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:39:33.901 13:01:53 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:39:33.901 13:01:53 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:33.901 13:01:53 -- nvmf/common.sh@436 -- # prepare_net_devs 00:39:33.901 13:01:53 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:39:33.901 13:01:53 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:39:33.901 13:01:53 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:33.901 13:01:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:39:33.901 13:01:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:33.901 13:01:53 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:39:33.901 13:01:53 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:39:33.901 13:01:53 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:39:33.901 13:01:53 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:39:33.901 13:01:53 
-- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:39:33.901 13:01:53 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:39:33.901 13:01:53 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:33.901 13:01:53 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:33.901 13:01:53 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:39:33.901 13:01:53 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:39:33.901 13:01:53 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:39:33.901 13:01:53 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:39:33.901 13:01:53 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:39:33.901 13:01:53 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:33.901 13:01:53 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:39:33.901 13:01:53 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:39:33.901 13:01:53 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:39:33.901 13:01:53 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:39:33.901 13:01:53 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:39:33.901 13:01:53 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:39:33.901 Cannot find device "nvmf_tgt_br" 00:39:33.901 13:01:53 -- nvmf/common.sh@154 -- # true 00:39:33.901 13:01:53 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:39:33.901 Cannot find device "nvmf_tgt_br2" 00:39:33.901 13:01:53 -- nvmf/common.sh@155 -- # true 00:39:33.901 13:01:53 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:39:33.901 13:01:53 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:39:33.901 Cannot find device "nvmf_tgt_br" 00:39:33.901 13:01:53 -- nvmf/common.sh@157 -- # true 00:39:33.901 13:01:53 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:39:33.901 Cannot find device "nvmf_tgt_br2" 00:39:33.901 13:01:53 -- nvmf/common.sh@158 -- # true 00:39:33.901 13:01:53 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:39:33.901 13:01:53 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:39:33.901 13:01:53 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:39:33.901 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:39:33.901 13:01:53 -- nvmf/common.sh@161 -- # true 00:39:33.901 13:01:53 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:39:33.901 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:39:33.901 13:01:53 -- nvmf/common.sh@162 -- # true 00:39:33.901 13:01:53 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:39:33.901 13:01:53 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:39:33.901 13:01:53 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:39:33.901 13:01:53 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:39:33.901 13:01:53 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:39:33.901 13:01:53 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:39:33.901 13:01:53 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:39:33.901 13:01:53 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:39:33.901 13:01:53 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 
10.0.0.3/24 dev nvmf_tgt_if2 00:39:33.901 13:01:53 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:39:33.901 13:01:53 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:39:33.901 13:01:53 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:39:33.901 13:01:53 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:39:33.901 13:01:53 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:39:33.901 13:01:53 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:39:33.901 13:01:53 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:39:34.160 13:01:53 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:39:34.160 13:01:53 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:39:34.160 13:01:53 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:39:34.160 13:01:53 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:39:34.160 13:01:53 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:39:34.160 13:01:53 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:39:34.160 13:01:53 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:39:34.160 13:01:53 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:39:34.160 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:34.160 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.047 ms 00:39:34.160 00:39:34.160 --- 10.0.0.2 ping statistics --- 00:39:34.160 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:34.160 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:39:34.160 13:01:53 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:39:34.160 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:39:34.160 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:39:34.160 00:39:34.160 --- 10.0.0.3 ping statistics --- 00:39:34.160 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:34.160 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:39:34.160 13:01:53 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:39:34.160 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:34.160 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.052 ms 00:39:34.160 00:39:34.160 --- 10.0.0.1 ping statistics --- 00:39:34.160 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:34.160 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:39:34.160 13:01:53 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:34.160 13:01:53 -- nvmf/common.sh@421 -- # return 0 00:39:34.160 13:01:53 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:39:34.160 13:01:53 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:34.160 13:01:53 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:39:34.160 13:01:53 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:39:34.160 13:01:53 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:34.160 13:01:53 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:39:34.160 13:01:53 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:39:34.160 13:01:53 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:39:34.160 13:01:53 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:39:34.160 13:01:53 -- common/autotest_common.sh@712 -- # xtrace_disable 00:39:34.160 13:01:53 -- common/autotest_common.sh@10 -- # set +x 00:39:34.160 13:01:53 -- nvmf/common.sh@469 -- # nvmfpid=83036 00:39:34.160 13:01:53 -- nvmf/common.sh@470 -- # waitforlisten 83036 00:39:34.160 13:01:53 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:39:34.160 13:01:53 -- common/autotest_common.sh@819 -- # '[' -z 83036 ']' 00:39:34.160 13:01:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:34.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:34.160 13:01:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:39:34.160 13:01:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:34.160 13:01:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:39:34.160 13:01:53 -- common/autotest_common.sh@10 -- # set +x 00:39:34.160 [2024-07-22 13:01:53.490098] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:39:34.160 [2024-07-22 13:01:53.490375] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:34.437 [2024-07-22 13:01:53.619769] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:34.437 [2024-07-22 13:01:53.707131] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:39:34.437 [2024-07-22 13:01:53.707293] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:34.437 [2024-07-22 13:01:53.707309] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:34.437 [2024-07-22 13:01:53.707319] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
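The target for this suite is started inside the namespace (ip netns exec nvmf_tgt_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0x1), and its startup banner above points at the tracepoint shared memory it creates. A sketch of pulling that trace while the target runs; the spdk_trace binary path is an assumption based on the build layout used elsewhere in this log, and the output paths are arbitrary:

# Decode a snapshot of the nvmf tracepoints from the running target
# (app name "nvmf", shm instance id 0, as the banner above suggests).
/home/vagrant/spdk_repo/spdk/build/bin/spdk_trace -s nvmf -i 0 > /tmp/nvmf_trace.txt

# Or keep the raw shared-memory file for offline analysis, as also suggested.
cp /dev/shm/nvmf_trace.0 /tmp/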
00:39:34.437 [2024-07-22 13:01:53.707350] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:39:35.370 13:01:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:39:35.370 13:01:54 -- common/autotest_common.sh@852 -- # return 0 00:39:35.370 13:01:54 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:39:35.370 13:01:54 -- common/autotest_common.sh@718 -- # xtrace_disable 00:39:35.370 13:01:54 -- common/autotest_common.sh@10 -- # set +x 00:39:35.370 13:01:54 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:35.370 13:01:54 -- target/nvmf_lvs_grow.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:39:35.370 [2024-07-22 13:01:54.764406] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:35.370 13:01:54 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:39:35.370 13:01:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:39:35.370 13:01:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:39:35.370 13:01:54 -- common/autotest_common.sh@10 -- # set +x 00:39:35.628 ************************************ 00:39:35.628 START TEST lvs_grow_clean 00:39:35.628 ************************************ 00:39:35.628 13:01:54 -- common/autotest_common.sh@1104 -- # lvs_grow 00:39:35.628 13:01:54 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:39:35.628 13:01:54 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:39:35.628 13:01:54 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:39:35.628 13:01:54 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:39:35.628 13:01:54 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:39:35.628 13:01:54 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:39:35.628 13:01:54 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:39:35.628 13:01:54 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:39:35.628 13:01:54 -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:39:35.886 13:01:55 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:39:35.886 13:01:55 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:39:36.144 13:01:55 -- target/nvmf_lvs_grow.sh@28 -- # lvs=ad34b6e1-c7a3-4182-8458-ee053b6938b9 00:39:36.144 13:01:55 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ad34b6e1-c7a3-4182-8458-ee053b6938b9 00:39:36.144 13:01:55 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:39:36.144 13:01:55 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:39:36.144 13:01:55 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:39:36.144 13:01:55 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u ad34b6e1-c7a3-4182-8458-ee053b6938b9 lvol 150 00:39:36.402 13:01:55 -- target/nvmf_lvs_grow.sh@33 -- # lvol=9cdb02cc-a0e8-4fd8-b984-1069894baf66 00:39:36.402 13:01:55 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:39:36.402 13:01:55 -- target/nvmf_lvs_grow.sh@37 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:39:36.660 [2024-07-22 13:01:56.052003] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:39:36.660 [2024-07-22 13:01:56.052096] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:39:36.660 true 00:39:36.660 13:01:56 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ad34b6e1-c7a3-4182-8458-ee053b6938b9 00:39:36.660 13:01:56 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:39:37.226 13:01:56 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:39:37.226 13:01:56 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:39:37.226 13:01:56 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 9cdb02cc-a0e8-4fd8-b984-1069894baf66 00:39:37.484 13:01:56 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:39:37.743 [2024-07-22 13:01:56.988619] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:37.743 13:01:57 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:39:38.001 13:01:57 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=83207 00:39:38.001 13:01:57 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:39:38.001 13:01:57 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:39:38.001 13:01:57 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 83207 /var/tmp/bdevperf.sock 00:39:38.001 13:01:57 -- common/autotest_common.sh@819 -- # '[' -z 83207 ']' 00:39:38.001 13:01:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:39:38.001 13:01:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:39:38.001 13:01:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:39:38.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:39:38.001 13:01:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:39:38.001 13:01:57 -- common/autotest_common.sh@10 -- # set +x 00:39:38.002 [2024-07-22 13:01:57.276542] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
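The lvs_grow_clean setup above layers an lvstore on an AIO bdev backed by a plain 200M file, carves a 150 lvol out of it, and exports it over TCP; growing the file to 400M and rescanning the AIO bdev doubles the block count, and the bdev_lvol_grow_lvstore call that follows later in the run lets the lvstore claim the new clusters without being recreated. A condensed sketch of that grow path, using the same RPCs as the trace:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
AIO_FILE=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev

# Build the small pool: 200M file -> AIO bdev -> lvstore -> 150 lvol
# (cluster size 4 MiB, as in the trace above).
truncate -s 200M "$AIO_FILE"
$RPC bdev_aio_create "$AIO_FILE" aio_bdev 4096
LVS=$($RPC bdev_lvol_create_lvstore --cluster-sz 4194304 \
        --md-pages-per-cluster-ratio 300 aio_bdev lvs)
LVOL=$($RPC bdev_lvol_create -u "$LVS" lvol 150)

# Grow it in place: enlarge the file, rescan the AIO bdev, then grow the
# lvstore (49 -> 99 data clusters in this run).
truncate -s 400M "$AIO_FILE"
$RPC bdev_aio_rescan aio_bdev
$RPC bdev_lvol_grow_lvstore -u "$LVS"
$RPC bdev_lvol_get_lvstores -u "$LVS" | jq -r '.[0].total_data_clusters'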
00:39:38.002 [2024-07-22 13:01:57.276677] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83207 ] 00:39:38.002 [2024-07-22 13:01:57.415796] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:38.259 [2024-07-22 13:01:57.501273] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:39:38.825 13:01:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:39:38.825 13:01:58 -- common/autotest_common.sh@852 -- # return 0 00:39:38.825 13:01:58 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:39:39.392 Nvme0n1 00:39:39.392 13:01:58 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:39:39.392 [ 00:39:39.392 { 00:39:39.392 "aliases": [ 00:39:39.392 "9cdb02cc-a0e8-4fd8-b984-1069894baf66" 00:39:39.392 ], 00:39:39.392 "assigned_rate_limits": { 00:39:39.392 "r_mbytes_per_sec": 0, 00:39:39.392 "rw_ios_per_sec": 0, 00:39:39.392 "rw_mbytes_per_sec": 0, 00:39:39.392 "w_mbytes_per_sec": 0 00:39:39.392 }, 00:39:39.392 "block_size": 4096, 00:39:39.392 "claimed": false, 00:39:39.392 "driver_specific": { 00:39:39.392 "mp_policy": "active_passive", 00:39:39.392 "nvme": [ 00:39:39.392 { 00:39:39.392 "ctrlr_data": { 00:39:39.392 "ana_reporting": false, 00:39:39.392 "cntlid": 1, 00:39:39.392 "firmware_revision": "24.01.1", 00:39:39.392 "model_number": "SPDK bdev Controller", 00:39:39.392 "multi_ctrlr": true, 00:39:39.392 "oacs": { 00:39:39.392 "firmware": 0, 00:39:39.392 "format": 0, 00:39:39.392 "ns_manage": 0, 00:39:39.392 "security": 0 00:39:39.392 }, 00:39:39.392 "serial_number": "SPDK0", 00:39:39.392 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:39.392 "vendor_id": "0x8086" 00:39:39.392 }, 00:39:39.392 "ns_data": { 00:39:39.392 "can_share": true, 00:39:39.392 "id": 1 00:39:39.392 }, 00:39:39.392 "trid": { 00:39:39.392 "adrfam": "IPv4", 00:39:39.392 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:39.392 "traddr": "10.0.0.2", 00:39:39.392 "trsvcid": "4420", 00:39:39.392 "trtype": "TCP" 00:39:39.392 }, 00:39:39.392 "vs": { 00:39:39.392 "nvme_version": "1.3" 00:39:39.392 } 00:39:39.392 } 00:39:39.392 ] 00:39:39.392 }, 00:39:39.392 "name": "Nvme0n1", 00:39:39.392 "num_blocks": 38912, 00:39:39.392 "product_name": "NVMe disk", 00:39:39.392 "supported_io_types": { 00:39:39.392 "abort": true, 00:39:39.392 "compare": true, 00:39:39.392 "compare_and_write": true, 00:39:39.392 "flush": true, 00:39:39.392 "nvme_admin": true, 00:39:39.392 "nvme_io": true, 00:39:39.392 "read": true, 00:39:39.392 "reset": true, 00:39:39.392 "unmap": true, 00:39:39.392 "write": true, 00:39:39.392 "write_zeroes": true 00:39:39.392 }, 00:39:39.392 "uuid": "9cdb02cc-a0e8-4fd8-b984-1069894baf66", 00:39:39.392 "zoned": false 00:39:39.392 } 00:39:39.392 ] 00:39:39.392 13:01:58 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:39:39.392 13:01:58 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=83255 00:39:39.392 13:01:58 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:39:39.650 Running I/O for 10 seconds... 
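Editor's note: the clean-path setup traced above reduces to the sketch below. It reuses the exact rpc.py calls and sizes from this run; $lvs stands for the lvstore UUID printed by bdev_lvol_create_lvstore (ad34b6e1-... here), and the ordering is simplified — in the test the grow step runs while bdevperf I/O is in flight.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    aio_file=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
    # back the lvstore with a 200M file exported as an AIO bdev (4K blocks)
    rm -f "$aio_file" && truncate -s 200M "$aio_file"
    $rpc bdev_aio_create "$aio_file" aio_bdev 4096
    lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 150)          # 150M lvol inside the 49-cluster store
    # export the lvol over NVMe/TCP
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # grow the backing file to 400M, rescan the AIO bdev, then grow the lvstore (49 -> 99 clusters)
    truncate -s 400M "$aio_file"
    $rpc bdev_aio_rescan aio_bdev
    $rpc bdev_lvol_grow_lvstore -u "$lvs"
    $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # expect 99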
00:39:40.590 Latency(us) 00:39:40.590 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:40.590 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:40.590 Nvme0n1 : 1.00 8460.00 33.05 0.00 0.00 0.00 0.00 0.00 00:39:40.590 =================================================================================================================== 00:39:40.590 Total : 8460.00 33.05 0.00 0.00 0.00 0.00 0.00 00:39:40.590 00:39:41.542 13:02:00 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u ad34b6e1-c7a3-4182-8458-ee053b6938b9 00:39:41.542 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:41.542 Nvme0n1 : 2.00 8404.50 32.83 0.00 0.00 0.00 0.00 0.00 00:39:41.542 =================================================================================================================== 00:39:41.542 Total : 8404.50 32.83 0.00 0.00 0.00 0.00 0.00 00:39:41.542 00:39:41.801 true 00:39:41.801 13:02:01 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ad34b6e1-c7a3-4182-8458-ee053b6938b9 00:39:41.801 13:02:01 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:39:42.058 13:02:01 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:39:42.058 13:02:01 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:39:42.058 13:02:01 -- target/nvmf_lvs_grow.sh@65 -- # wait 83255 00:39:42.624 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:42.624 Nvme0n1 : 3.00 8403.00 32.82 0.00 0.00 0.00 0.00 0.00 00:39:42.624 =================================================================================================================== 00:39:42.624 Total : 8403.00 32.82 0.00 0.00 0.00 0.00 0.00 00:39:42.624 00:39:43.557 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:43.557 Nvme0n1 : 4.00 8381.50 32.74 0.00 0.00 0.00 0.00 0.00 00:39:43.557 =================================================================================================================== 00:39:43.557 Total : 8381.50 32.74 0.00 0.00 0.00 0.00 0.00 00:39:43.557 00:39:44.500 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:44.500 Nvme0n1 : 5.00 8361.40 32.66 0.00 0.00 0.00 0.00 0.00 00:39:44.500 =================================================================================================================== 00:39:44.500 Total : 8361.40 32.66 0.00 0.00 0.00 0.00 0.00 00:39:44.500 00:39:45.875 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:45.875 Nvme0n1 : 6.00 8339.33 32.58 0.00 0.00 0.00 0.00 0.00 00:39:45.875 =================================================================================================================== 00:39:45.875 Total : 8339.33 32.58 0.00 0.00 0.00 0.00 0.00 00:39:45.875 00:39:46.810 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:46.810 Nvme0n1 : 7.00 8317.29 32.49 0.00 0.00 0.00 0.00 0.00 00:39:46.810 =================================================================================================================== 00:39:46.810 Total : 8317.29 32.49 0.00 0.00 0.00 0.00 0.00 00:39:46.810 00:39:47.747 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:47.747 Nvme0n1 : 8.00 8307.62 32.45 0.00 0.00 0.00 0.00 0.00 00:39:47.747 
=================================================================================================================== 00:39:47.747 Total : 8307.62 32.45 0.00 0.00 0.00 0.00 0.00 00:39:47.747 00:39:48.683 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:48.683 Nvme0n1 : 9.00 8287.78 32.37 0.00 0.00 0.00 0.00 0.00 00:39:48.683 =================================================================================================================== 00:39:48.683 Total : 8287.78 32.37 0.00 0.00 0.00 0.00 0.00 00:39:48.683 00:39:49.619 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:49.619 Nvme0n1 : 10.00 8274.60 32.32 0.00 0.00 0.00 0.00 0.00 00:39:49.619 =================================================================================================================== 00:39:49.619 Total : 8274.60 32.32 0.00 0.00 0.00 0.00 0.00 00:39:49.619 00:39:49.619 00:39:49.619 Latency(us) 00:39:49.619 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:49.619 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:49.619 Nvme0n1 : 10.02 8274.82 32.32 0.00 0.00 15463.62 7357.91 33602.09 00:39:49.619 =================================================================================================================== 00:39:49.619 Total : 8274.82 32.32 0.00 0.00 15463.62 7357.91 33602.09 00:39:49.619 0 00:39:49.619 13:02:08 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 83207 00:39:49.619 13:02:08 -- common/autotest_common.sh@926 -- # '[' -z 83207 ']' 00:39:49.619 13:02:08 -- common/autotest_common.sh@930 -- # kill -0 83207 00:39:49.619 13:02:08 -- common/autotest_common.sh@931 -- # uname 00:39:49.619 13:02:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:39:49.619 13:02:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 83207 00:39:49.619 13:02:08 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:39:49.619 13:02:08 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:39:49.619 killing process with pid 83207 00:39:49.619 13:02:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 83207' 00:39:49.619 13:02:08 -- common/autotest_common.sh@945 -- # kill 83207 00:39:49.619 Received shutdown signal, test time was about 10.000000 seconds 00:39:49.619 00:39:49.619 Latency(us) 00:39:49.619 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:49.619 =================================================================================================================== 00:39:49.619 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:39:49.619 13:02:08 -- common/autotest_common.sh@950 -- # wait 83207 00:39:49.879 13:02:09 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:39:50.138 13:02:09 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ad34b6e1-c7a3-4182-8458-ee053b6938b9 00:39:50.138 13:02:09 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:39:50.397 13:02:09 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:39:50.397 13:02:09 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:39:50.397 13:02:09 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:39:50.655 [2024-07-22 13:02:09.925862] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:39:50.655 
13:02:09 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ad34b6e1-c7a3-4182-8458-ee053b6938b9 00:39:50.655 13:02:09 -- common/autotest_common.sh@640 -- # local es=0 00:39:50.655 13:02:09 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ad34b6e1-c7a3-4182-8458-ee053b6938b9 00:39:50.655 13:02:09 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:39:50.655 13:02:09 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:39:50.655 13:02:09 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:39:50.655 13:02:09 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:39:50.655 13:02:09 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:39:50.655 13:02:09 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:39:50.655 13:02:09 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:39:50.655 13:02:09 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:39:50.655 13:02:09 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ad34b6e1-c7a3-4182-8458-ee053b6938b9 00:39:50.913 2024/07/22 13:02:10 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:ad34b6e1-c7a3-4182-8458-ee053b6938b9], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:39:50.913 request: 00:39:50.913 { 00:39:50.913 "method": "bdev_lvol_get_lvstores", 00:39:50.913 "params": { 00:39:50.913 "uuid": "ad34b6e1-c7a3-4182-8458-ee053b6938b9" 00:39:50.913 } 00:39:50.913 } 00:39:50.913 Got JSON-RPC error response 00:39:50.913 GoRPCClient: error on JSON-RPC call 00:39:50.913 13:02:10 -- common/autotest_common.sh@643 -- # es=1 00:39:50.913 13:02:10 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:39:50.913 13:02:10 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:39:50.913 13:02:10 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:39:50.913 13:02:10 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:39:51.172 aio_bdev 00:39:51.172 13:02:10 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 9cdb02cc-a0e8-4fd8-b984-1069894baf66 00:39:51.172 13:02:10 -- common/autotest_common.sh@887 -- # local bdev_name=9cdb02cc-a0e8-4fd8-b984-1069894baf66 00:39:51.172 13:02:10 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:39:51.172 13:02:10 -- common/autotest_common.sh@889 -- # local i 00:39:51.172 13:02:10 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:39:51.172 13:02:10 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:39:51.172 13:02:10 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:39:51.431 13:02:10 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9cdb02cc-a0e8-4fd8-b984-1069894baf66 -t 2000 00:39:51.689 [ 00:39:51.689 { 00:39:51.689 "aliases": [ 00:39:51.689 "lvs/lvol" 00:39:51.689 ], 00:39:51.689 "assigned_rate_limits": { 00:39:51.689 "r_mbytes_per_sec": 0, 00:39:51.689 "rw_ios_per_sec": 0, 00:39:51.689 "rw_mbytes_per_sec": 0, 00:39:51.689 "w_mbytes_per_sec": 0 00:39:51.689 }, 00:39:51.689 "block_size": 4096, 
00:39:51.689 "claimed": false, 00:39:51.689 "driver_specific": { 00:39:51.689 "lvol": { 00:39:51.689 "base_bdev": "aio_bdev", 00:39:51.689 "clone": false, 00:39:51.689 "esnap_clone": false, 00:39:51.689 "lvol_store_uuid": "ad34b6e1-c7a3-4182-8458-ee053b6938b9", 00:39:51.689 "snapshot": false, 00:39:51.689 "thin_provision": false 00:39:51.689 } 00:39:51.689 }, 00:39:51.689 "name": "9cdb02cc-a0e8-4fd8-b984-1069894baf66", 00:39:51.689 "num_blocks": 38912, 00:39:51.689 "product_name": "Logical Volume", 00:39:51.689 "supported_io_types": { 00:39:51.690 "abort": false, 00:39:51.690 "compare": false, 00:39:51.690 "compare_and_write": false, 00:39:51.690 "flush": false, 00:39:51.690 "nvme_admin": false, 00:39:51.690 "nvme_io": false, 00:39:51.690 "read": true, 00:39:51.690 "reset": true, 00:39:51.690 "unmap": true, 00:39:51.690 "write": true, 00:39:51.690 "write_zeroes": true 00:39:51.690 }, 00:39:51.690 "uuid": "9cdb02cc-a0e8-4fd8-b984-1069894baf66", 00:39:51.690 "zoned": false 00:39:51.690 } 00:39:51.690 ] 00:39:51.690 13:02:10 -- common/autotest_common.sh@895 -- # return 0 00:39:51.690 13:02:10 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:39:51.690 13:02:10 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ad34b6e1-c7a3-4182-8458-ee053b6938b9 00:39:51.948 13:02:11 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:39:51.948 13:02:11 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:39:51.948 13:02:11 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ad34b6e1-c7a3-4182-8458-ee053b6938b9 00:39:52.207 13:02:11 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:39:52.207 13:02:11 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 9cdb02cc-a0e8-4fd8-b984-1069894baf66 00:39:52.466 13:02:11 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ad34b6e1-c7a3-4182-8458-ee053b6938b9 00:39:52.724 13:02:12 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:39:52.983 13:02:12 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:39:53.565 ************************************ 00:39:53.565 END TEST lvs_grow_clean 00:39:53.565 ************************************ 00:39:53.565 00:39:53.565 real 0m17.868s 00:39:53.565 user 0m17.233s 00:39:53.565 sys 0m2.078s 00:39:53.565 13:02:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:53.565 13:02:12 -- common/autotest_common.sh@10 -- # set +x 00:39:53.565 13:02:12 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:39:53.565 13:02:12 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:39:53.565 13:02:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:39:53.565 13:02:12 -- common/autotest_common.sh@10 -- # set +x 00:39:53.565 ************************************ 00:39:53.565 START TEST lvs_grow_dirty 00:39:53.565 ************************************ 00:39:53.565 13:02:12 -- common/autotest_common.sh@1104 -- # lvs_grow dirty 00:39:53.565 13:02:12 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:39:53.565 13:02:12 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:39:53.565 13:02:12 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:39:53.565 13:02:12 -- target/nvmf_lvs_grow.sh@18 -- # local 
aio_init_size_mb=200 00:39:53.565 13:02:12 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:39:53.565 13:02:12 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:39:53.565 13:02:12 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:39:53.565 13:02:12 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:39:53.565 13:02:12 -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:39:53.847 13:02:12 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:39:53.847 13:02:12 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:39:53.847 13:02:13 -- target/nvmf_lvs_grow.sh@28 -- # lvs=7aab83ab-d799-49a8-ad18-8b37f388fd8b 00:39:54.105 13:02:13 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7aab83ab-d799-49a8-ad18-8b37f388fd8b 00:39:54.105 13:02:13 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:39:54.105 13:02:13 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:39:54.105 13:02:13 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:39:54.105 13:02:13 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 7aab83ab-d799-49a8-ad18-8b37f388fd8b lvol 150 00:39:54.673 13:02:13 -- target/nvmf_lvs_grow.sh@33 -- # lvol=e2619e22-b7cf-42b7-bed0-4e84cca57744 00:39:54.673 13:02:13 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:39:54.673 13:02:13 -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:39:54.673 [2024-07-22 13:02:14.061343] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:39:54.673 [2024-07-22 13:02:14.061427] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:39:54.673 true 00:39:54.673 13:02:14 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7aab83ab-d799-49a8-ad18-8b37f388fd8b 00:39:54.673 13:02:14 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:39:54.931 13:02:14 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:39:54.931 13:02:14 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:39:55.190 13:02:14 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 e2619e22-b7cf-42b7-bed0-4e84cca57744 00:39:55.449 13:02:14 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:39:55.708 13:02:15 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:39:55.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
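Editor's note: both the clean and dirty runs push I/O from a separate bdevperf process started in wait mode, attaching the exported namespace over bdevperf's own RPC socket before kicking off the job. A minimal sketch of that pattern with the flags from this run (the waitforlisten/waitforbdev polling helpers are elided):

    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bdevperf.sock
    # start bdevperf idle (-z) with the run's workload flags: 4 KiB randwrite, QD 128, 10 s
    $bdevperf -r "$sock" -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
    # once the socket is up, attach the exported namespace as bdev Nvme0n1
    $rpc -s "$sock" bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
    $rpc -s "$sock" bdev_get_bdevs -b Nvme0n1 -t 3000
    # kick off the configured workload
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests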
00:39:55.966 13:02:15 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=83644 00:39:55.966 13:02:15 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:39:55.966 13:02:15 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:39:55.966 13:02:15 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 83644 /var/tmp/bdevperf.sock 00:39:55.966 13:02:15 -- common/autotest_common.sh@819 -- # '[' -z 83644 ']' 00:39:55.966 13:02:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:39:55.966 13:02:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:39:55.966 13:02:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:39:55.966 13:02:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:39:55.966 13:02:15 -- common/autotest_common.sh@10 -- # set +x 00:39:55.966 [2024-07-22 13:02:15.327149] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:39:55.966 [2024-07-22 13:02:15.327263] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83644 ] 00:39:56.225 [2024-07-22 13:02:15.465524] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:56.225 [2024-07-22 13:02:15.561375] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:39:57.160 13:02:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:39:57.160 13:02:16 -- common/autotest_common.sh@852 -- # return 0 00:39:57.160 13:02:16 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:39:57.160 Nvme0n1 00:39:57.419 13:02:16 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:39:57.419 [ 00:39:57.419 { 00:39:57.419 "aliases": [ 00:39:57.419 "e2619e22-b7cf-42b7-bed0-4e84cca57744" 00:39:57.419 ], 00:39:57.419 "assigned_rate_limits": { 00:39:57.419 "r_mbytes_per_sec": 0, 00:39:57.419 "rw_ios_per_sec": 0, 00:39:57.419 "rw_mbytes_per_sec": 0, 00:39:57.419 "w_mbytes_per_sec": 0 00:39:57.419 }, 00:39:57.419 "block_size": 4096, 00:39:57.419 "claimed": false, 00:39:57.419 "driver_specific": { 00:39:57.419 "mp_policy": "active_passive", 00:39:57.419 "nvme": [ 00:39:57.419 { 00:39:57.419 "ctrlr_data": { 00:39:57.419 "ana_reporting": false, 00:39:57.419 "cntlid": 1, 00:39:57.419 "firmware_revision": "24.01.1", 00:39:57.419 "model_number": "SPDK bdev Controller", 00:39:57.419 "multi_ctrlr": true, 00:39:57.419 "oacs": { 00:39:57.419 "firmware": 0, 00:39:57.419 "format": 0, 00:39:57.419 "ns_manage": 0, 00:39:57.419 "security": 0 00:39:57.419 }, 00:39:57.419 "serial_number": "SPDK0", 00:39:57.419 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:57.419 "vendor_id": "0x8086" 00:39:57.419 }, 00:39:57.419 "ns_data": { 00:39:57.419 "can_share": true, 00:39:57.419 "id": 1 00:39:57.419 }, 00:39:57.419 "trid": { 00:39:57.419 "adrfam": "IPv4", 00:39:57.419 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:57.419 "traddr": "10.0.0.2", 00:39:57.419 "trsvcid": "4420", 00:39:57.419 "trtype": "TCP" 00:39:57.420 }, 
00:39:57.420 "vs": { 00:39:57.420 "nvme_version": "1.3" 00:39:57.420 } 00:39:57.420 } 00:39:57.420 ] 00:39:57.420 }, 00:39:57.420 "name": "Nvme0n1", 00:39:57.420 "num_blocks": 38912, 00:39:57.420 "product_name": "NVMe disk", 00:39:57.420 "supported_io_types": { 00:39:57.420 "abort": true, 00:39:57.420 "compare": true, 00:39:57.420 "compare_and_write": true, 00:39:57.420 "flush": true, 00:39:57.420 "nvme_admin": true, 00:39:57.420 "nvme_io": true, 00:39:57.420 "read": true, 00:39:57.420 "reset": true, 00:39:57.420 "unmap": true, 00:39:57.420 "write": true, 00:39:57.420 "write_zeroes": true 00:39:57.420 }, 00:39:57.420 "uuid": "e2619e22-b7cf-42b7-bed0-4e84cca57744", 00:39:57.420 "zoned": false 00:39:57.420 } 00:39:57.420 ] 00:39:57.679 13:02:16 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:39:57.679 13:02:16 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=83686 00:39:57.679 13:02:16 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:39:57.679 Running I/O for 10 seconds... 00:39:58.616 Latency(us) 00:39:58.616 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:58.616 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:58.616 Nvme0n1 : 1.00 8501.00 33.21 0.00 0.00 0.00 0.00 0.00 00:39:58.616 =================================================================================================================== 00:39:58.616 Total : 8501.00 33.21 0.00 0.00 0.00 0.00 0.00 00:39:58.616 00:39:59.575 13:02:18 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 7aab83ab-d799-49a8-ad18-8b37f388fd8b 00:39:59.575 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:59.575 Nvme0n1 : 2.00 8388.00 32.77 0.00 0.00 0.00 0.00 0.00 00:39:59.575 =================================================================================================================== 00:39:59.575 Total : 8388.00 32.77 0.00 0.00 0.00 0.00 0.00 00:39:59.575 00:39:59.861 true 00:39:59.861 13:02:19 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7aab83ab-d799-49a8-ad18-8b37f388fd8b 00:39:59.861 13:02:19 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:40:00.120 13:02:19 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:40:00.120 13:02:19 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:40:00.120 13:02:19 -- target/nvmf_lvs_grow.sh@65 -- # wait 83686 00:40:00.688 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:00.688 Nvme0n1 : 3.00 8419.33 32.89 0.00 0.00 0.00 0.00 0.00 00:40:00.688 =================================================================================================================== 00:40:00.688 Total : 8419.33 32.89 0.00 0.00 0.00 0.00 0.00 00:40:00.688 00:40:01.624 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:01.624 Nvme0n1 : 4.00 8310.25 32.46 0.00 0.00 0.00 0.00 0.00 00:40:01.624 =================================================================================================================== 00:40:01.624 Total : 8310.25 32.46 0.00 0.00 0.00 0.00 0.00 00:40:01.624 00:40:02.560 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:02.560 Nvme0n1 : 5.00 8245.20 32.21 0.00 0.00 0.00 0.00 0.00 00:40:02.560 
=================================================================================================================== 00:40:02.560 Total : 8245.20 32.21 0.00 0.00 0.00 0.00 0.00 00:40:02.560 00:40:03.936 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:03.936 Nvme0n1 : 6.00 8200.67 32.03 0.00 0.00 0.00 0.00 0.00 00:40:03.936 =================================================================================================================== 00:40:03.936 Total : 8200.67 32.03 0.00 0.00 0.00 0.00 0.00 00:40:03.936 00:40:04.873 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:04.873 Nvme0n1 : 7.00 8035.57 31.39 0.00 0.00 0.00 0.00 0.00 00:40:04.873 =================================================================================================================== 00:40:04.873 Total : 8035.57 31.39 0.00 0.00 0.00 0.00 0.00 00:40:04.873 00:40:05.809 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:05.809 Nvme0n1 : 8.00 8017.38 31.32 0.00 0.00 0.00 0.00 0.00 00:40:05.809 =================================================================================================================== 00:40:05.809 Total : 8017.38 31.32 0.00 0.00 0.00 0.00 0.00 00:40:05.809 00:40:06.753 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:06.753 Nvme0n1 : 9.00 7965.78 31.12 0.00 0.00 0.00 0.00 0.00 00:40:06.753 =================================================================================================================== 00:40:06.753 Total : 7965.78 31.12 0.00 0.00 0.00 0.00 0.00 00:40:06.753 00:40:07.688 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:07.688 Nvme0n1 : 10.00 7924.20 30.95 0.00 0.00 0.00 0.00 0.00 00:40:07.688 =================================================================================================================== 00:40:07.688 Total : 7924.20 30.95 0.00 0.00 0.00 0.00 0.00 00:40:07.688 00:40:07.688 00:40:07.688 Latency(us) 00:40:07.688 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:07.688 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:07.689 Nvme0n1 : 10.01 7925.96 30.96 0.00 0.00 16144.88 5928.03 129642.12 00:40:07.689 =================================================================================================================== 00:40:07.689 Total : 7925.96 30.96 0.00 0.00 16144.88 5928.03 129642.12 00:40:07.689 0 00:40:07.689 13:02:26 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 83644 00:40:07.689 13:02:26 -- common/autotest_common.sh@926 -- # '[' -z 83644 ']' 00:40:07.689 13:02:26 -- common/autotest_common.sh@930 -- # kill -0 83644 00:40:07.689 13:02:26 -- common/autotest_common.sh@931 -- # uname 00:40:07.689 13:02:26 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:40:07.689 13:02:26 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 83644 00:40:07.689 killing process with pid 83644 00:40:07.689 Received shutdown signal, test time was about 10.000000 seconds 00:40:07.689 00:40:07.689 Latency(us) 00:40:07.689 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:07.689 =================================================================================================================== 00:40:07.689 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:40:07.689 13:02:26 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:40:07.689 13:02:26 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 
00:40:07.689 13:02:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 83644' 00:40:07.689 13:02:27 -- common/autotest_common.sh@945 -- # kill 83644 00:40:07.689 13:02:27 -- common/autotest_common.sh@950 -- # wait 83644 00:40:07.947 13:02:27 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:40:08.206 13:02:27 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7aab83ab-d799-49a8-ad18-8b37f388fd8b 00:40:08.206 13:02:27 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:40:08.464 13:02:27 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:40:08.464 13:02:27 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:40:08.464 13:02:27 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 83036 00:40:08.464 13:02:27 -- target/nvmf_lvs_grow.sh@74 -- # wait 83036 00:40:08.464 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 83036 Killed "${NVMF_APP[@]}" "$@" 00:40:08.464 13:02:27 -- target/nvmf_lvs_grow.sh@74 -- # true 00:40:08.464 13:02:27 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:40:08.464 13:02:27 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:40:08.464 13:02:27 -- common/autotest_common.sh@712 -- # xtrace_disable 00:40:08.464 13:02:27 -- common/autotest_common.sh@10 -- # set +x 00:40:08.464 13:02:27 -- nvmf/common.sh@469 -- # nvmfpid=83843 00:40:08.464 13:02:27 -- nvmf/common.sh@470 -- # waitforlisten 83843 00:40:08.464 13:02:27 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:40:08.464 13:02:27 -- common/autotest_common.sh@819 -- # '[' -z 83843 ']' 00:40:08.464 13:02:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:08.464 13:02:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:40:08.464 13:02:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:08.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:08.464 13:02:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:40:08.464 13:02:27 -- common/autotest_common.sh@10 -- # set +x 00:40:08.464 [2024-07-22 13:02:27.849892] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:40:08.464 [2024-07-22 13:02:27.849992] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:08.722 [2024-07-22 13:02:27.984858] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:08.722 [2024-07-22 13:02:28.069170] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:40:08.722 [2024-07-22 13:02:28.069350] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:08.722 [2024-07-22 13:02:28.069369] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:08.722 [2024-07-22 13:02:28.069381] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
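Editor's note: the dirty variant differs only in the teardown. Instead of unloading the lvstore cleanly, the target is killed with SIGKILL while the metadata is still dirty, then restarted; re-creating the AIO bdev makes the blobstore load path run recovery (the "Performing recovery on blobstore" notice that follows), after which the cluster counts are re-checked. Condensed from the commands in this run:

    kill -9 "$nvmfpid" && wait "$nvmfpid"     # 83036 in this run; no clean lvstore unload
    # restart the target in the test namespace (simplified from nvmfappstart)
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    # re-attaching the backing file triggers blobstore recovery of lvs/lvol
    $rpc bdev_aio_create "$aio_file" aio_bdev 4096
    $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters'        # expect 61
    $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'  # expect 99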
00:40:08.722 [2024-07-22 13:02:28.069410] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:40:09.656 13:02:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:40:09.657 13:02:28 -- common/autotest_common.sh@852 -- # return 0 00:40:09.657 13:02:28 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:40:09.657 13:02:28 -- common/autotest_common.sh@718 -- # xtrace_disable 00:40:09.657 13:02:28 -- common/autotest_common.sh@10 -- # set +x 00:40:09.657 13:02:28 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:09.657 13:02:28 -- target/nvmf_lvs_grow.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:40:09.657 [2024-07-22 13:02:29.069968] blobstore.c:4642:bs_recover: *NOTICE*: Performing recovery on blobstore 00:40:09.657 [2024-07-22 13:02:29.070348] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:40:09.657 [2024-07-22 13:02:29.070569] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:40:09.915 13:02:29 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:40:09.915 13:02:29 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev e2619e22-b7cf-42b7-bed0-4e84cca57744 00:40:09.915 13:02:29 -- common/autotest_common.sh@887 -- # local bdev_name=e2619e22-b7cf-42b7-bed0-4e84cca57744 00:40:09.915 13:02:29 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:40:09.915 13:02:29 -- common/autotest_common.sh@889 -- # local i 00:40:09.915 13:02:29 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:40:09.915 13:02:29 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:40:09.915 13:02:29 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:40:10.189 13:02:29 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e2619e22-b7cf-42b7-bed0-4e84cca57744 -t 2000 00:40:10.189 [ 00:40:10.189 { 00:40:10.189 "aliases": [ 00:40:10.189 "lvs/lvol" 00:40:10.189 ], 00:40:10.189 "assigned_rate_limits": { 00:40:10.189 "r_mbytes_per_sec": 0, 00:40:10.189 "rw_ios_per_sec": 0, 00:40:10.189 "rw_mbytes_per_sec": 0, 00:40:10.189 "w_mbytes_per_sec": 0 00:40:10.189 }, 00:40:10.189 "block_size": 4096, 00:40:10.189 "claimed": false, 00:40:10.189 "driver_specific": { 00:40:10.189 "lvol": { 00:40:10.189 "base_bdev": "aio_bdev", 00:40:10.189 "clone": false, 00:40:10.189 "esnap_clone": false, 00:40:10.189 "lvol_store_uuid": "7aab83ab-d799-49a8-ad18-8b37f388fd8b", 00:40:10.189 "snapshot": false, 00:40:10.189 "thin_provision": false 00:40:10.189 } 00:40:10.189 }, 00:40:10.189 "name": "e2619e22-b7cf-42b7-bed0-4e84cca57744", 00:40:10.189 "num_blocks": 38912, 00:40:10.189 "product_name": "Logical Volume", 00:40:10.189 "supported_io_types": { 00:40:10.189 "abort": false, 00:40:10.189 "compare": false, 00:40:10.189 "compare_and_write": false, 00:40:10.189 "flush": false, 00:40:10.189 "nvme_admin": false, 00:40:10.189 "nvme_io": false, 00:40:10.189 "read": true, 00:40:10.189 "reset": true, 00:40:10.189 "unmap": true, 00:40:10.189 "write": true, 00:40:10.189 "write_zeroes": true 00:40:10.189 }, 00:40:10.189 "uuid": "e2619e22-b7cf-42b7-bed0-4e84cca57744", 00:40:10.189 "zoned": false 00:40:10.189 } 00:40:10.189 ] 00:40:10.448 13:02:29 -- common/autotest_common.sh@895 -- # return 0 00:40:10.448 13:02:29 -- target/nvmf_lvs_grow.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
7aab83ab-d799-49a8-ad18-8b37f388fd8b 00:40:10.448 13:02:29 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:40:10.448 13:02:29 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:40:10.448 13:02:29 -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7aab83ab-d799-49a8-ad18-8b37f388fd8b 00:40:10.448 13:02:29 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:40:10.706 13:02:30 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:40:10.706 13:02:30 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:40:10.964 [2024-07-22 13:02:30.236298] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:40:10.964 13:02:30 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7aab83ab-d799-49a8-ad18-8b37f388fd8b 00:40:10.964 13:02:30 -- common/autotest_common.sh@640 -- # local es=0 00:40:10.964 13:02:30 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7aab83ab-d799-49a8-ad18-8b37f388fd8b 00:40:10.964 13:02:30 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:40:10.964 13:02:30 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:40:10.964 13:02:30 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:40:10.964 13:02:30 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:40:10.964 13:02:30 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:40:10.964 13:02:30 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:40:10.964 13:02:30 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:40:10.964 13:02:30 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:40:10.964 13:02:30 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7aab83ab-d799-49a8-ad18-8b37f388fd8b 00:40:11.222 2024/07/22 13:02:30 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:7aab83ab-d799-49a8-ad18-8b37f388fd8b], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:40:11.222 request: 00:40:11.222 { 00:40:11.222 "method": "bdev_lvol_get_lvstores", 00:40:11.222 "params": { 00:40:11.222 "uuid": "7aab83ab-d799-49a8-ad18-8b37f388fd8b" 00:40:11.222 } 00:40:11.222 } 00:40:11.222 Got JSON-RPC error response 00:40:11.222 GoRPCClient: error on JSON-RPC call 00:40:11.222 13:02:30 -- common/autotest_common.sh@643 -- # es=1 00:40:11.222 13:02:30 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:40:11.222 13:02:30 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:40:11.222 13:02:30 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:40:11.222 13:02:30 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:40:11.480 aio_bdev 00:40:11.480 13:02:30 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev e2619e22-b7cf-42b7-bed0-4e84cca57744 00:40:11.480 13:02:30 -- common/autotest_common.sh@887 -- # local bdev_name=e2619e22-b7cf-42b7-bed0-4e84cca57744 00:40:11.480 13:02:30 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:40:11.480 
13:02:30 -- common/autotest_common.sh@889 -- # local i 00:40:11.480 13:02:30 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:40:11.480 13:02:30 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:40:11.480 13:02:30 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:40:11.739 13:02:31 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e2619e22-b7cf-42b7-bed0-4e84cca57744 -t 2000 00:40:11.998 [ 00:40:11.998 { 00:40:11.998 "aliases": [ 00:40:11.998 "lvs/lvol" 00:40:11.998 ], 00:40:11.998 "assigned_rate_limits": { 00:40:11.998 "r_mbytes_per_sec": 0, 00:40:11.998 "rw_ios_per_sec": 0, 00:40:11.998 "rw_mbytes_per_sec": 0, 00:40:11.998 "w_mbytes_per_sec": 0 00:40:11.998 }, 00:40:11.998 "block_size": 4096, 00:40:11.998 "claimed": false, 00:40:11.998 "driver_specific": { 00:40:11.998 "lvol": { 00:40:11.998 "base_bdev": "aio_bdev", 00:40:11.998 "clone": false, 00:40:11.998 "esnap_clone": false, 00:40:11.998 "lvol_store_uuid": "7aab83ab-d799-49a8-ad18-8b37f388fd8b", 00:40:11.998 "snapshot": false, 00:40:11.998 "thin_provision": false 00:40:11.998 } 00:40:11.998 }, 00:40:11.998 "name": "e2619e22-b7cf-42b7-bed0-4e84cca57744", 00:40:11.998 "num_blocks": 38912, 00:40:11.998 "product_name": "Logical Volume", 00:40:11.998 "supported_io_types": { 00:40:11.998 "abort": false, 00:40:11.998 "compare": false, 00:40:11.998 "compare_and_write": false, 00:40:11.998 "flush": false, 00:40:11.998 "nvme_admin": false, 00:40:11.998 "nvme_io": false, 00:40:11.998 "read": true, 00:40:11.998 "reset": true, 00:40:11.998 "unmap": true, 00:40:11.998 "write": true, 00:40:11.998 "write_zeroes": true 00:40:11.998 }, 00:40:11.998 "uuid": "e2619e22-b7cf-42b7-bed0-4e84cca57744", 00:40:11.998 "zoned": false 00:40:11.998 } 00:40:11.998 ] 00:40:11.998 13:02:31 -- common/autotest_common.sh@895 -- # return 0 00:40:11.998 13:02:31 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7aab83ab-d799-49a8-ad18-8b37f388fd8b 00:40:11.998 13:02:31 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:40:12.256 13:02:31 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:40:12.256 13:02:31 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7aab83ab-d799-49a8-ad18-8b37f388fd8b 00:40:12.256 13:02:31 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:40:12.515 13:02:31 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:40:12.515 13:02:31 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete e2619e22-b7cf-42b7-bed0-4e84cca57744 00:40:12.773 13:02:32 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7aab83ab-d799-49a8-ad18-8b37f388fd8b 00:40:13.036 13:02:32 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:40:13.294 13:02:32 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:40:13.862 ************************************ 00:40:13.862 END TEST lvs_grow_dirty 00:40:13.862 ************************************ 00:40:13.862 00:40:13.862 real 0m20.286s 00:40:13.862 user 0m41.636s 00:40:13.862 sys 0m8.243s 00:40:13.862 13:02:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:40:13.862 13:02:33 -- common/autotest_common.sh@10 -- # set +x 00:40:13.862 13:02:33 -- 
target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:40:13.862 13:02:33 -- common/autotest_common.sh@796 -- # type=--id 00:40:13.862 13:02:33 -- common/autotest_common.sh@797 -- # id=0 00:40:13.862 13:02:33 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:40:13.862 13:02:33 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:40:13.862 13:02:33 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:40:13.862 13:02:33 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:40:13.862 13:02:33 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:40:13.862 13:02:33 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:40:13.862 nvmf_trace.0 00:40:13.862 13:02:33 -- common/autotest_common.sh@811 -- # return 0 00:40:13.862 13:02:33 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:40:13.863 13:02:33 -- nvmf/common.sh@476 -- # nvmfcleanup 00:40:13.863 13:02:33 -- nvmf/common.sh@116 -- # sync 00:40:14.122 13:02:33 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:40:14.122 13:02:33 -- nvmf/common.sh@119 -- # set +e 00:40:14.122 13:02:33 -- nvmf/common.sh@120 -- # for i in {1..20} 00:40:14.122 13:02:33 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:40:14.122 rmmod nvme_tcp 00:40:14.122 rmmod nvme_fabrics 00:40:14.122 rmmod nvme_keyring 00:40:14.122 13:02:33 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:40:14.122 13:02:33 -- nvmf/common.sh@123 -- # set -e 00:40:14.122 13:02:33 -- nvmf/common.sh@124 -- # return 0 00:40:14.122 13:02:33 -- nvmf/common.sh@477 -- # '[' -n 83843 ']' 00:40:14.122 13:02:33 -- nvmf/common.sh@478 -- # killprocess 83843 00:40:14.122 13:02:33 -- common/autotest_common.sh@926 -- # '[' -z 83843 ']' 00:40:14.122 13:02:33 -- common/autotest_common.sh@930 -- # kill -0 83843 00:40:14.122 13:02:33 -- common/autotest_common.sh@931 -- # uname 00:40:14.122 13:02:33 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:40:14.122 13:02:33 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 83843 00:40:14.122 killing process with pid 83843 00:40:14.122 13:02:33 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:40:14.122 13:02:33 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:40:14.122 13:02:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 83843' 00:40:14.122 13:02:33 -- common/autotest_common.sh@945 -- # kill 83843 00:40:14.122 13:02:33 -- common/autotest_common.sh@950 -- # wait 83843 00:40:14.382 13:02:33 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:40:14.382 13:02:33 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:40:14.382 13:02:33 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:40:14.382 13:02:33 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:40:14.382 13:02:33 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:40:14.382 13:02:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:14.382 13:02:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:40:14.382 13:02:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:14.382 13:02:33 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:40:14.382 ************************************ 00:40:14.382 END TEST nvmf_lvs_grow 00:40:14.382 ************************************ 00:40:14.382 00:40:14.382 real 0m40.663s 00:40:14.382 user 1m5.210s 00:40:14.382 sys 0m11.073s 00:40:14.382 13:02:33 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:40:14.382 13:02:33 -- common/autotest_common.sh@10 -- # set +x 00:40:14.382 13:02:33 -- nvmf/nvmf.sh@49 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:40:14.382 13:02:33 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:40:14.382 13:02:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:40:14.382 13:02:33 -- common/autotest_common.sh@10 -- # set +x 00:40:14.382 ************************************ 00:40:14.382 START TEST nvmf_bdev_io_wait 00:40:14.382 ************************************ 00:40:14.382 13:02:33 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:40:14.382 * Looking for test storage... 00:40:14.382 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:40:14.382 13:02:33 -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:40:14.382 13:02:33 -- nvmf/common.sh@7 -- # uname -s 00:40:14.382 13:02:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:14.382 13:02:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:14.382 13:02:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:14.382 13:02:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:14.382 13:02:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:14.382 13:02:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:14.382 13:02:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:14.382 13:02:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:14.382 13:02:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:14.382 13:02:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:14.382 13:02:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:864ae4a3-cd96-439b-8ef2-6dab4d992115 00:40:14.382 13:02:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=864ae4a3-cd96-439b-8ef2-6dab4d992115 00:40:14.382 13:02:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:14.382 13:02:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:14.382 13:02:33 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:40:14.382 13:02:33 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:40:14.382 13:02:33 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:14.382 13:02:33 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:14.382 13:02:33 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:14.382 13:02:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:14.382 13:02:33 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:14.382 13:02:33 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:14.382 13:02:33 -- paths/export.sh@5 -- # export PATH 00:40:14.382 13:02:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:14.382 13:02:33 -- nvmf/common.sh@46 -- # : 0 00:40:14.382 13:02:33 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:40:14.382 13:02:33 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:40:14.382 13:02:33 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:40:14.382 13:02:33 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:14.382 13:02:33 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:14.382 13:02:33 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:40:14.382 13:02:33 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:40:14.382 13:02:33 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:40:14.642 13:02:33 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:40:14.642 13:02:33 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:40:14.642 13:02:33 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:40:14.642 13:02:33 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:40:14.642 13:02:33 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:14.642 13:02:33 -- nvmf/common.sh@436 -- # prepare_net_devs 00:40:14.642 13:02:33 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:40:14.642 13:02:33 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:40:14.642 13:02:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:14.642 13:02:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:40:14.642 13:02:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:14.642 13:02:33 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:40:14.642 13:02:33 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:40:14.642 13:02:33 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:40:14.642 13:02:33 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:40:14.642 13:02:33 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 
00:40:14.642 13:02:33 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:40:14.642 13:02:33 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:14.642 13:02:33 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:14.642 13:02:33 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:40:14.642 13:02:33 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:40:14.642 13:02:33 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:40:14.642 13:02:33 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:40:14.642 13:02:33 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:40:14.642 13:02:33 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:14.642 13:02:33 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:40:14.642 13:02:33 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:40:14.642 13:02:33 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:40:14.642 13:02:33 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:40:14.642 13:02:33 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:40:14.642 13:02:33 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:40:14.642 Cannot find device "nvmf_tgt_br" 00:40:14.642 13:02:33 -- nvmf/common.sh@154 -- # true 00:40:14.642 13:02:33 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:40:14.642 Cannot find device "nvmf_tgt_br2" 00:40:14.642 13:02:33 -- nvmf/common.sh@155 -- # true 00:40:14.642 13:02:33 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:40:14.642 13:02:33 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:40:14.642 Cannot find device "nvmf_tgt_br" 00:40:14.642 13:02:33 -- nvmf/common.sh@157 -- # true 00:40:14.642 13:02:33 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:40:14.642 Cannot find device "nvmf_tgt_br2" 00:40:14.642 13:02:33 -- nvmf/common.sh@158 -- # true 00:40:14.642 13:02:33 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:40:14.642 13:02:33 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:40:14.642 13:02:33 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:40:14.642 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:40:14.642 13:02:33 -- nvmf/common.sh@161 -- # true 00:40:14.642 13:02:33 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:40:14.642 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:40:14.642 13:02:33 -- nvmf/common.sh@162 -- # true 00:40:14.642 13:02:33 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:40:14.642 13:02:33 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:40:14.642 13:02:33 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:40:14.642 13:02:33 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:40:14.642 13:02:33 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:40:14.642 13:02:33 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:40:14.642 13:02:34 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:40:14.642 13:02:34 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:40:14.642 13:02:34 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:40:14.642 
13:02:34 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:40:14.642 13:02:34 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:40:14.642 13:02:34 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:40:14.642 13:02:34 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:40:14.642 13:02:34 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:40:14.642 13:02:34 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:40:14.642 13:02:34 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:40:14.901 13:02:34 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:40:14.901 13:02:34 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:40:14.901 13:02:34 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:40:14.901 13:02:34 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:40:14.901 13:02:34 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:40:14.901 13:02:34 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:40:14.901 13:02:34 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:40:14.901 13:02:34 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:40:14.901 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:14.901 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.124 ms 00:40:14.901 00:40:14.901 --- 10.0.0.2 ping statistics --- 00:40:14.901 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:14.901 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:40:14.901 13:02:34 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:40:14.901 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:40:14.901 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:40:14.901 00:40:14.901 --- 10.0.0.3 ping statistics --- 00:40:14.901 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:14.901 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:40:14.901 13:02:34 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:40:14.901 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:14.901 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.049 ms 00:40:14.901 00:40:14.901 --- 10.0.0.1 ping statistics --- 00:40:14.901 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:14.901 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:40:14.901 13:02:34 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:14.901 13:02:34 -- nvmf/common.sh@421 -- # return 0 00:40:14.901 13:02:34 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:40:14.901 13:02:34 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:14.901 13:02:34 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:40:14.901 13:02:34 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:40:14.901 13:02:34 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:14.901 13:02:34 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:40:14.901 13:02:34 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:40:14.901 13:02:34 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:40:14.901 13:02:34 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:40:14.901 13:02:34 -- common/autotest_common.sh@712 -- # xtrace_disable 00:40:14.901 13:02:34 -- common/autotest_common.sh@10 -- # set +x 00:40:14.901 13:02:34 -- nvmf/common.sh@469 -- # nvmfpid=84250 00:40:14.901 13:02:34 -- nvmf/common.sh@470 -- # waitforlisten 84250 00:40:14.901 13:02:34 -- common/autotest_common.sh@819 -- # '[' -z 84250 ']' 00:40:14.901 13:02:34 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:40:14.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:14.901 13:02:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:14.901 13:02:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:40:14.902 13:02:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:14.902 13:02:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:40:14.902 13:02:34 -- common/autotest_common.sh@10 -- # set +x 00:40:14.902 [2024-07-22 13:02:34.220811] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:40:14.902 [2024-07-22 13:02:34.220926] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:15.161 [2024-07-22 13:02:34.365305] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:40:15.161 [2024-07-22 13:02:34.464492] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:40:15.161 [2024-07-22 13:02:34.464692] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:15.161 [2024-07-22 13:02:34.464708] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:15.161 [2024-07-22 13:02:34.464720] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
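For readability, a condensed reconstruction of the nvmf_veth_init sequence traced above (the initial teardown pass, the link-up steps and the ping checks are omitted; this is a sketch, not the verbatim nvmf/common.sh source):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator end stays in the root namespace
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # first target interface
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2     # second target interface
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                      # NVMF_INITIATOR_IP
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # NVMF_FIRST_TARGET_IP
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # NVMF_SECOND_TARGET_IP
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br                       # bridge the three root-namespace ends together
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  modprobe nvme-tcp                                             # host-side transport for the initiator

The target application is then prefixed with "ip netns exec nvmf_tgt_ns_spdk" (common.sh@208), which is why the initiator in the root namespace reaches it at 10.0.0.2:4420 across the bridge.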
00:40:15.161 [2024-07-22 13:02:34.464873] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:40:15.161 [2024-07-22 13:02:34.465008] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:40:15.161 [2024-07-22 13:02:34.465156] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:40:15.161 [2024-07-22 13:02:34.465163] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:40:16.098 13:02:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:40:16.098 13:02:35 -- common/autotest_common.sh@852 -- # return 0 00:40:16.098 13:02:35 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:40:16.098 13:02:35 -- common/autotest_common.sh@718 -- # xtrace_disable 00:40:16.098 13:02:35 -- common/autotest_common.sh@10 -- # set +x 00:40:16.098 13:02:35 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:16.098 13:02:35 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:40:16.098 13:02:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:40:16.098 13:02:35 -- common/autotest_common.sh@10 -- # set +x 00:40:16.098 13:02:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:40:16.098 13:02:35 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:40:16.098 13:02:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:40:16.098 13:02:35 -- common/autotest_common.sh@10 -- # set +x 00:40:16.098 13:02:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:40:16.098 13:02:35 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:40:16.098 13:02:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:40:16.098 13:02:35 -- common/autotest_common.sh@10 -- # set +x 00:40:16.098 [2024-07-22 13:02:35.331757] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:16.098 13:02:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:40:16.098 13:02:35 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:40:16.098 13:02:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:40:16.098 13:02:35 -- common/autotest_common.sh@10 -- # set +x 00:40:16.098 Malloc0 00:40:16.098 13:02:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:40:16.098 13:02:35 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:40:16.098 13:02:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:40:16.098 13:02:35 -- common/autotest_common.sh@10 -- # set +x 00:40:16.098 13:02:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:40:16.098 13:02:35 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:40:16.098 13:02:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:40:16.098 13:02:35 -- common/autotest_common.sh@10 -- # set +x 00:40:16.098 13:02:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:40:16.098 13:02:35 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:16.098 13:02:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:40:16.098 13:02:35 -- common/autotest_common.sh@10 -- # set +x 00:40:16.098 [2024-07-22 13:02:35.389076] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:16.098 13:02:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:40:16.098 13:02:35 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=84313 00:40:16.098 13:02:35 
-- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:40:16.098 13:02:35 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:40:16.098 13:02:35 -- target/bdev_io_wait.sh@30 -- # READ_PID=84315 00:40:16.098 13:02:35 -- nvmf/common.sh@520 -- # config=() 00:40:16.098 13:02:35 -- nvmf/common.sh@520 -- # local subsystem config 00:40:16.098 13:02:35 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:40:16.098 13:02:35 -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:40:16.098 13:02:35 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:40:16.098 { 00:40:16.098 "params": { 00:40:16.098 "name": "Nvme$subsystem", 00:40:16.098 "trtype": "$TEST_TRANSPORT", 00:40:16.098 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:16.098 "adrfam": "ipv4", 00:40:16.098 "trsvcid": "$NVMF_PORT", 00:40:16.098 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:16.098 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:16.098 "hdgst": ${hdgst:-false}, 00:40:16.098 "ddgst": ${ddgst:-false} 00:40:16.098 }, 00:40:16.098 "method": "bdev_nvme_attach_controller" 00:40:16.098 } 00:40:16.098 EOF 00:40:16.098 )") 00:40:16.098 13:02:35 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:40:16.098 13:02:35 -- nvmf/common.sh@520 -- # config=() 00:40:16.098 13:02:35 -- nvmf/common.sh@520 -- # local subsystem config 00:40:16.098 13:02:35 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=84317 00:40:16.098 13:02:35 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:40:16.098 13:02:35 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:40:16.098 { 00:40:16.098 "params": { 00:40:16.098 "name": "Nvme$subsystem", 00:40:16.098 "trtype": "$TEST_TRANSPORT", 00:40:16.098 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:16.098 "adrfam": "ipv4", 00:40:16.098 "trsvcid": "$NVMF_PORT", 00:40:16.098 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:16.098 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:16.098 "hdgst": ${hdgst:-false}, 00:40:16.098 "ddgst": ${ddgst:-false} 00:40:16.098 }, 00:40:16.098 "method": "bdev_nvme_attach_controller" 00:40:16.098 } 00:40:16.098 EOF 00:40:16.098 )") 00:40:16.098 13:02:35 -- nvmf/common.sh@542 -- # cat 00:40:16.098 13:02:35 -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:40:16.098 13:02:35 -- nvmf/common.sh@542 -- # cat 00:40:16.098 13:02:35 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=84319 00:40:16.098 13:02:35 -- target/bdev_io_wait.sh@35 -- # sync 00:40:16.098 13:02:35 -- nvmf/common.sh@544 -- # jq . 
00:40:16.098 13:02:35 -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:40:16.098 13:02:35 -- nvmf/common.sh@545 -- # IFS=, 00:40:16.098 13:02:35 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:40:16.098 "params": { 00:40:16.098 "name": "Nvme1", 00:40:16.098 "trtype": "tcp", 00:40:16.098 "traddr": "10.0.0.2", 00:40:16.098 "adrfam": "ipv4", 00:40:16.098 "trsvcid": "4420", 00:40:16.098 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:16.099 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:16.099 "hdgst": false, 00:40:16.099 "ddgst": false 00:40:16.099 }, 00:40:16.099 "method": "bdev_nvme_attach_controller" 00:40:16.099 }' 00:40:16.099 13:02:35 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:40:16.099 13:02:35 -- nvmf/common.sh@520 -- # config=() 00:40:16.099 13:02:35 -- nvmf/common.sh@520 -- # local subsystem config 00:40:16.099 13:02:35 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:40:16.099 13:02:35 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:40:16.099 { 00:40:16.099 "params": { 00:40:16.099 "name": "Nvme$subsystem", 00:40:16.099 "trtype": "$TEST_TRANSPORT", 00:40:16.099 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:16.099 "adrfam": "ipv4", 00:40:16.099 "trsvcid": "$NVMF_PORT", 00:40:16.099 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:16.099 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:16.099 "hdgst": ${hdgst:-false}, 00:40:16.099 "ddgst": ${ddgst:-false} 00:40:16.099 }, 00:40:16.099 "method": "bdev_nvme_attach_controller" 00:40:16.099 } 00:40:16.099 EOF 00:40:16.099 )") 00:40:16.099 13:02:35 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:40:16.099 13:02:35 -- nvmf/common.sh@544 -- # jq . 00:40:16.099 13:02:35 -- nvmf/common.sh@520 -- # config=() 00:40:16.099 13:02:35 -- nvmf/common.sh@520 -- # local subsystem config 00:40:16.099 13:02:35 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:40:16.099 13:02:35 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:40:16.099 { 00:40:16.099 "params": { 00:40:16.099 "name": "Nvme$subsystem", 00:40:16.099 "trtype": "$TEST_TRANSPORT", 00:40:16.099 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:16.099 "adrfam": "ipv4", 00:40:16.099 "trsvcid": "$NVMF_PORT", 00:40:16.099 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:16.099 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:16.099 "hdgst": ${hdgst:-false}, 00:40:16.099 "ddgst": ${ddgst:-false} 00:40:16.099 }, 00:40:16.099 "method": "bdev_nvme_attach_controller" 00:40:16.099 } 00:40:16.099 EOF 00:40:16.099 )") 00:40:16.099 13:02:35 -- nvmf/common.sh@542 -- # cat 00:40:16.099 13:02:35 -- nvmf/common.sh@542 -- # cat 00:40:16.099 13:02:35 -- nvmf/common.sh@545 -- # IFS=, 00:40:16.099 13:02:35 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:40:16.099 "params": { 00:40:16.099 "name": "Nvme1", 00:40:16.099 "trtype": "tcp", 00:40:16.099 "traddr": "10.0.0.2", 00:40:16.099 "adrfam": "ipv4", 00:40:16.099 "trsvcid": "4420", 00:40:16.099 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:16.099 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:16.099 "hdgst": false, 00:40:16.099 "ddgst": false 00:40:16.099 }, 00:40:16.099 "method": "bdev_nvme_attach_controller" 00:40:16.099 }' 00:40:16.099 13:02:35 -- nvmf/common.sh@544 -- # jq . 00:40:16.099 13:02:35 -- nvmf/common.sh@544 -- # jq . 
00:40:16.099 13:02:35 -- nvmf/common.sh@545 -- # IFS=, 00:40:16.099 13:02:35 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:40:16.099 "params": { 00:40:16.099 "name": "Nvme1", 00:40:16.099 "trtype": "tcp", 00:40:16.099 "traddr": "10.0.0.2", 00:40:16.099 "adrfam": "ipv4", 00:40:16.099 "trsvcid": "4420", 00:40:16.099 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:16.099 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:16.099 "hdgst": false, 00:40:16.099 "ddgst": false 00:40:16.099 }, 00:40:16.099 "method": "bdev_nvme_attach_controller" 00:40:16.099 }' 00:40:16.099 13:02:35 -- nvmf/common.sh@545 -- # IFS=, 00:40:16.099 13:02:35 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:40:16.099 "params": { 00:40:16.099 "name": "Nvme1", 00:40:16.099 "trtype": "tcp", 00:40:16.099 "traddr": "10.0.0.2", 00:40:16.099 "adrfam": "ipv4", 00:40:16.099 "trsvcid": "4420", 00:40:16.099 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:16.099 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:16.099 "hdgst": false, 00:40:16.099 "ddgst": false 00:40:16.099 }, 00:40:16.099 "method": "bdev_nvme_attach_controller" 00:40:16.099 }' 00:40:16.099 [2024-07-22 13:02:35.441706] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:40:16.099 [2024-07-22 13:02:35.441782] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:40:16.099 13:02:35 -- target/bdev_io_wait.sh@37 -- # wait 84313 00:40:16.099 [2024-07-22 13:02:35.475576] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:40:16.099 [2024-07-22 13:02:35.475660] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:40:16.099 [2024-07-22 13:02:35.476532] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:40:16.099 [2024-07-22 13:02:35.476603] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:40:16.099 [2024-07-22 13:02:35.485456] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:40:16.099 [2024-07-22 13:02:35.485563] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:40:16.358 [2024-07-22 13:02:35.645805] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:16.358 [2024-07-22 13:02:35.714464] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:16.358 [2024-07-22 13:02:35.716517] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:40:16.617 [2024-07-22 13:02:35.787578] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:40:16.617 [2024-07-22 13:02:35.793600] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:16.617 [2024-07-22 13:02:35.865090] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:40:16.617 [2024-07-22 13:02:35.877070] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:16.617 Running I/O for 1 seconds... 
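Re-indented for readability, the attach-controller entry that each of the four bdevperf instances (write, read, flush and unmap) receives on /dev/fd/63 from gen_nvmf_target_json, exactly as rendered by the printf calls above; any enclosing document structure the helper adds around this entry is not visible in the trace and is omitted here:

  {
    "params": {
      "name": "Nvme1",
      "trtype": "tcp",
      "traddr": "10.0.0.2",
      "adrfam": "ipv4",
      "trsvcid": "4420",
      "subnqn": "nqn.2016-06.io.spdk:cnode1",
      "hostnqn": "nqn.2016-06.io.spdk:host1",
      "hdgst": false,
      "ddgst": false
    },
    "method": "bdev_nvme_attach_controller"
  }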
00:40:16.617 Running I/O for 1 seconds... 00:40:16.617 [2024-07-22 13:02:35.946072] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:40:16.617 Running I/O for 1 seconds... 00:40:16.876 Running I/O for 1 seconds... 00:40:17.812 00:40:17.812 Latency(us) 00:40:17.812 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:17.812 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:40:17.812 Nvme1n1 : 1.01 10131.49 39.58 0.00 0.00 12580.22 7626.01 24307.90 00:40:17.812 =================================================================================================================== 00:40:17.812 Total : 10131.49 39.58 0.00 0.00 12580.22 7626.01 24307.90 00:40:17.812 00:40:17.812 Latency(us) 00:40:17.812 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:17.812 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:40:17.812 Nvme1n1 : 1.01 9047.11 35.34 0.00 0.00 14094.03 2204.39 17992.61 00:40:17.812 =================================================================================================================== 00:40:17.812 Total : 9047.11 35.34 0.00 0.00 14094.03 2204.39 17992.61 00:40:17.812 00:40:17.812 Latency(us) 00:40:17.812 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:17.812 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:40:17.812 Nvme1n1 : 1.01 8307.19 32.45 0.00 0.00 15343.51 7387.69 28835.84 00:40:17.812 =================================================================================================================== 00:40:17.812 Total : 8307.19 32.45 0.00 0.00 15343.51 7387.69 28835.84 00:40:17.812 00:40:17.812 Latency(us) 00:40:17.812 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:17.812 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:40:17.812 Nvme1n1 : 1.00 195863.17 765.09 0.00 0.00 650.97 271.83 949.53 00:40:17.812 =================================================================================================================== 00:40:17.812 Total : 195863.17 765.09 0.00 0.00 650.97 271.83 949.53 00:40:18.070 13:02:37 -- target/bdev_io_wait.sh@38 -- # wait 84315 00:40:18.070 13:02:37 -- target/bdev_io_wait.sh@39 -- # wait 84317 00:40:18.070 13:02:37 -- target/bdev_io_wait.sh@40 -- # wait 84319 00:40:18.070 13:02:37 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:40:18.070 13:02:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:40:18.070 13:02:37 -- common/autotest_common.sh@10 -- # set +x 00:40:18.070 13:02:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:40:18.070 13:02:37 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:40:18.070 13:02:37 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:40:18.070 13:02:37 -- nvmf/common.sh@476 -- # nvmfcleanup 00:40:18.070 13:02:37 -- nvmf/common.sh@116 -- # sync 00:40:18.070 13:02:37 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:40:18.070 13:02:37 -- nvmf/common.sh@119 -- # set +e 00:40:18.070 13:02:37 -- nvmf/common.sh@120 -- # for i in {1..20} 00:40:18.070 13:02:37 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:40:18.070 rmmod nvme_tcp 00:40:18.070 rmmod nvme_fabrics 00:40:18.070 rmmod nvme_keyring 00:40:18.070 13:02:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:40:18.329 13:02:37 -- nvmf/common.sh@123 -- # set -e 00:40:18.329 13:02:37 -- nvmf/common.sh@124 -- # return 0 00:40:18.329 13:02:37 -- 
nvmf/common.sh@477 -- # '[' -n 84250 ']' 00:40:18.329 13:02:37 -- nvmf/common.sh@478 -- # killprocess 84250 00:40:18.329 13:02:37 -- common/autotest_common.sh@926 -- # '[' -z 84250 ']' 00:40:18.329 13:02:37 -- common/autotest_common.sh@930 -- # kill -0 84250 00:40:18.329 13:02:37 -- common/autotest_common.sh@931 -- # uname 00:40:18.329 13:02:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:40:18.329 13:02:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 84250 00:40:18.329 13:02:37 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:40:18.329 killing process with pid 84250 00:40:18.329 13:02:37 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:40:18.329 13:02:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 84250' 00:40:18.329 13:02:37 -- common/autotest_common.sh@945 -- # kill 84250 00:40:18.329 13:02:37 -- common/autotest_common.sh@950 -- # wait 84250 00:40:18.329 13:02:37 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:40:18.329 13:02:37 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:40:18.329 13:02:37 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:40:18.329 13:02:37 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:40:18.329 13:02:37 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:40:18.329 13:02:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:18.329 13:02:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:40:18.329 13:02:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:18.329 13:02:37 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:40:18.588 00:40:18.588 real 0m4.049s 00:40:18.588 user 0m17.714s 00:40:18.588 sys 0m2.129s 00:40:18.588 ************************************ 00:40:18.588 END TEST nvmf_bdev_io_wait 00:40:18.588 ************************************ 00:40:18.588 13:02:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:40:18.588 13:02:37 -- common/autotest_common.sh@10 -- # set +x 00:40:18.588 13:02:37 -- nvmf/nvmf.sh@50 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:40:18.588 13:02:37 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:40:18.588 13:02:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:40:18.588 13:02:37 -- common/autotest_common.sh@10 -- # set +x 00:40:18.588 ************************************ 00:40:18.588 START TEST nvmf_queue_depth 00:40:18.588 ************************************ 00:40:18.588 13:02:37 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:40:18.588 * Looking for test storage... 
00:40:18.588 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:40:18.588 13:02:37 -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:40:18.588 13:02:37 -- nvmf/common.sh@7 -- # uname -s 00:40:18.588 13:02:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:18.588 13:02:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:18.588 13:02:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:18.588 13:02:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:18.588 13:02:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:18.588 13:02:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:18.588 13:02:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:18.588 13:02:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:18.588 13:02:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:18.588 13:02:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:18.588 13:02:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:864ae4a3-cd96-439b-8ef2-6dab4d992115 00:40:18.588 13:02:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=864ae4a3-cd96-439b-8ef2-6dab4d992115 00:40:18.588 13:02:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:18.588 13:02:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:18.588 13:02:37 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:40:18.588 13:02:37 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:40:18.588 13:02:37 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:18.588 13:02:37 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:18.588 13:02:37 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:18.588 13:02:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:18.588 13:02:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:18.588 13:02:37 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:18.588 13:02:37 -- 
paths/export.sh@5 -- # export PATH 00:40:18.588 13:02:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:18.588 13:02:37 -- nvmf/common.sh@46 -- # : 0 00:40:18.588 13:02:37 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:40:18.588 13:02:37 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:40:18.588 13:02:37 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:40:18.588 13:02:37 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:18.588 13:02:37 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:18.588 13:02:37 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:40:18.588 13:02:37 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:40:18.588 13:02:37 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:40:18.588 13:02:37 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:40:18.588 13:02:37 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:40:18.588 13:02:37 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:40:18.588 13:02:37 -- target/queue_depth.sh@19 -- # nvmftestinit 00:40:18.588 13:02:37 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:40:18.588 13:02:37 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:18.588 13:02:37 -- nvmf/common.sh@436 -- # prepare_net_devs 00:40:18.589 13:02:37 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:40:18.589 13:02:37 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:40:18.589 13:02:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:18.589 13:02:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:40:18.589 13:02:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:18.589 13:02:37 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:40:18.589 13:02:37 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:40:18.589 13:02:37 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:40:18.589 13:02:37 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:40:18.589 13:02:37 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:40:18.589 13:02:37 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:40:18.589 13:02:37 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:18.589 13:02:37 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:18.589 13:02:37 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:40:18.589 13:02:37 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:40:18.589 13:02:37 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:40:18.589 13:02:37 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:40:18.589 13:02:37 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:40:18.589 13:02:37 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:18.589 13:02:37 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:40:18.589 13:02:37 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:40:18.589 13:02:37 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:40:18.589 13:02:37 -- nvmf/common.sh@151 -- # 
NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:40:18.589 13:02:37 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:40:18.589 13:02:37 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:40:18.589 Cannot find device "nvmf_tgt_br" 00:40:18.589 13:02:37 -- nvmf/common.sh@154 -- # true 00:40:18.589 13:02:37 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:40:18.589 Cannot find device "nvmf_tgt_br2" 00:40:18.589 13:02:37 -- nvmf/common.sh@155 -- # true 00:40:18.589 13:02:37 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:40:18.589 13:02:37 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:40:18.589 Cannot find device "nvmf_tgt_br" 00:40:18.589 13:02:37 -- nvmf/common.sh@157 -- # true 00:40:18.589 13:02:37 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:40:18.589 Cannot find device "nvmf_tgt_br2" 00:40:18.589 13:02:37 -- nvmf/common.sh@158 -- # true 00:40:18.589 13:02:37 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:40:18.589 13:02:38 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:40:18.848 13:02:38 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:40:18.848 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:40:18.848 13:02:38 -- nvmf/common.sh@161 -- # true 00:40:18.848 13:02:38 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:40:18.848 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:40:18.848 13:02:38 -- nvmf/common.sh@162 -- # true 00:40:18.848 13:02:38 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:40:18.848 13:02:38 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:40:18.848 13:02:38 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:40:18.848 13:02:38 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:40:18.848 13:02:38 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:40:18.848 13:02:38 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:40:18.848 13:02:38 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:40:18.848 13:02:38 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:40:18.848 13:02:38 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:40:18.848 13:02:38 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:40:18.848 13:02:38 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:40:18.848 13:02:38 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:40:18.848 13:02:38 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:40:18.848 13:02:38 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:40:18.848 13:02:38 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:40:18.848 13:02:38 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:40:18.848 13:02:38 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:40:18.848 13:02:38 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:40:18.848 13:02:38 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:40:18.848 13:02:38 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:40:18.848 13:02:38 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:40:18.848 
13:02:38 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:40:18.848 13:02:38 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:40:18.848 13:02:38 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:40:18.848 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:18.848 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.108 ms 00:40:18.848 00:40:18.848 --- 10.0.0.2 ping statistics --- 00:40:18.848 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:18.848 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:40:18.848 13:02:38 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:40:18.848 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:40:18.848 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.098 ms 00:40:18.848 00:40:18.848 --- 10.0.0.3 ping statistics --- 00:40:18.848 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:18.848 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:40:18.848 13:02:38 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:40:18.848 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:40:18.848 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:40:18.848 00:40:18.848 --- 10.0.0.1 ping statistics --- 00:40:18.848 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:18.848 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:40:18.848 13:02:38 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:18.848 13:02:38 -- nvmf/common.sh@421 -- # return 0 00:40:18.848 13:02:38 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:40:18.848 13:02:38 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:18.848 13:02:38 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:40:18.848 13:02:38 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:40:18.848 13:02:38 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:18.848 13:02:38 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:40:18.848 13:02:38 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:40:18.848 13:02:38 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:40:18.848 13:02:38 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:40:18.848 13:02:38 -- common/autotest_common.sh@712 -- # xtrace_disable 00:40:18.848 13:02:38 -- common/autotest_common.sh@10 -- # set +x 00:40:18.848 13:02:38 -- nvmf/common.sh@469 -- # nvmfpid=84550 00:40:18.848 13:02:38 -- nvmf/common.sh@470 -- # waitforlisten 84550 00:40:18.848 13:02:38 -- common/autotest_common.sh@819 -- # '[' -z 84550 ']' 00:40:18.848 13:02:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:18.848 13:02:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:40:18.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:18.848 13:02:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:18.848 13:02:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:40:18.848 13:02:38 -- common/autotest_common.sh@10 -- # set +x 00:40:18.848 13:02:38 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:40:19.110 [2024-07-22 13:02:38.298564] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:40:19.110 [2024-07-22 13:02:38.298638] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:19.110 [2024-07-22 13:02:38.433693] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:19.110 [2024-07-22 13:02:38.517872] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:40:19.110 [2024-07-22 13:02:38.518032] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:19.110 [2024-07-22 13:02:38.518043] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:19.110 [2024-07-22 13:02:38.518051] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:19.110 [2024-07-22 13:02:38.518078] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:40:20.046 13:02:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:40:20.046 13:02:39 -- common/autotest_common.sh@852 -- # return 0 00:40:20.046 13:02:39 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:40:20.046 13:02:39 -- common/autotest_common.sh@718 -- # xtrace_disable 00:40:20.046 13:02:39 -- common/autotest_common.sh@10 -- # set +x 00:40:20.046 13:02:39 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:20.046 13:02:39 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:40:20.046 13:02:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:40:20.046 13:02:39 -- common/autotest_common.sh@10 -- # set +x 00:40:20.046 [2024-07-22 13:02:39.288460] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:20.046 13:02:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:40:20.046 13:02:39 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:40:20.046 13:02:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:40:20.046 13:02:39 -- common/autotest_common.sh@10 -- # set +x 00:40:20.046 Malloc0 00:40:20.046 13:02:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:40:20.046 13:02:39 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:40:20.046 13:02:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:40:20.046 13:02:39 -- common/autotest_common.sh@10 -- # set +x 00:40:20.046 13:02:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:40:20.046 13:02:39 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:40:20.046 13:02:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:40:20.046 13:02:39 -- common/autotest_common.sh@10 -- # set +x 00:40:20.046 13:02:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:40:20.046 13:02:39 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:20.046 13:02:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:40:20.046 13:02:39 -- common/autotest_common.sh@10 -- # set +x 00:40:20.046 [2024-07-22 13:02:39.351624] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:20.046 13:02:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:40:20.046 13:02:39 -- target/queue_depth.sh@30 -- # bdevperf_pid=84600 00:40:20.046 13:02:39 
-- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:40:20.046 13:02:39 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:40:20.046 13:02:39 -- target/queue_depth.sh@33 -- # waitforlisten 84600 /var/tmp/bdevperf.sock 00:40:20.046 13:02:39 -- common/autotest_common.sh@819 -- # '[' -z 84600 ']' 00:40:20.046 13:02:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:40:20.046 13:02:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:40:20.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:40:20.046 13:02:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:40:20.046 13:02:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:40:20.046 13:02:39 -- common/autotest_common.sh@10 -- # set +x 00:40:20.046 [2024-07-22 13:02:39.410028] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:40:20.046 [2024-07-22 13:02:39.410151] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84600 ] 00:40:20.305 [2024-07-22 13:02:39.548677] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:20.305 [2024-07-22 13:02:39.638163] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:40:21.241 13:02:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:40:21.241 13:02:40 -- common/autotest_common.sh@852 -- # return 0 00:40:21.241 13:02:40 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:40:21.241 13:02:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:40:21.241 13:02:40 -- common/autotest_common.sh@10 -- # set +x 00:40:21.241 NVMe0n1 00:40:21.241 13:02:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:40:21.241 13:02:40 -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:40:21.241 Running I/O for 10 seconds... 
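For reference, the control flow of the queue-depth run that follows, condensed from the trace (rpc_cmd is assumed to resolve to scripts/rpc.py pointed at the bdevperf RPC socket; this is a sketch, not the verbatim queue_depth.sh source):

  # start bdevperf idle (-z) on its own RPC socket: queue depth 1024, 4 KiB verify I/O for 10 s
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
  # hot-attach the NVMe-oF/TCP controller exported by the target running in the namespace
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
      bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  # kick off the actual run against the freshly attached NVMe0n1 bdev
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bdevperf.sock perform_tests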
00:40:31.235 00:40:31.235 Latency(us) 00:40:31.235 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:31.235 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:40:31.235 Verification LBA range: start 0x0 length 0x4000 00:40:31.235 NVMe0n1 : 10.06 14662.92 57.28 0.00 0.00 69583.99 13345.51 54573.61 00:40:31.235 =================================================================================================================== 00:40:31.235 Total : 14662.92 57.28 0.00 0.00 69583.99 13345.51 54573.61 00:40:31.235 0 00:40:31.235 13:02:50 -- target/queue_depth.sh@39 -- # killprocess 84600 00:40:31.235 13:02:50 -- common/autotest_common.sh@926 -- # '[' -z 84600 ']' 00:40:31.235 13:02:50 -- common/autotest_common.sh@930 -- # kill -0 84600 00:40:31.235 13:02:50 -- common/autotest_common.sh@931 -- # uname 00:40:31.235 13:02:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:40:31.235 13:02:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 84600 00:40:31.235 13:02:50 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:40:31.235 killing process with pid 84600 00:40:31.235 13:02:50 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:40:31.235 13:02:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 84600' 00:40:31.235 Received shutdown signal, test time was about 10.000000 seconds 00:40:31.235 00:40:31.235 Latency(us) 00:40:31.235 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:31.235 =================================================================================================================== 00:40:31.235 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:40:31.235 13:02:50 -- common/autotest_common.sh@945 -- # kill 84600 00:40:31.235 13:02:50 -- common/autotest_common.sh@950 -- # wait 84600 00:40:31.495 13:02:50 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:40:31.495 13:02:50 -- target/queue_depth.sh@43 -- # nvmftestfini 00:40:31.495 13:02:50 -- nvmf/common.sh@476 -- # nvmfcleanup 00:40:31.495 13:02:50 -- nvmf/common.sh@116 -- # sync 00:40:31.754 13:02:50 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:40:31.754 13:02:50 -- nvmf/common.sh@119 -- # set +e 00:40:31.754 13:02:50 -- nvmf/common.sh@120 -- # for i in {1..20} 00:40:31.754 13:02:50 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:40:31.754 rmmod nvme_tcp 00:40:31.754 rmmod nvme_fabrics 00:40:31.754 rmmod nvme_keyring 00:40:31.754 13:02:50 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:40:31.754 13:02:50 -- nvmf/common.sh@123 -- # set -e 00:40:31.754 13:02:50 -- nvmf/common.sh@124 -- # return 0 00:40:31.754 13:02:50 -- nvmf/common.sh@477 -- # '[' -n 84550 ']' 00:40:31.754 13:02:50 -- nvmf/common.sh@478 -- # killprocess 84550 00:40:31.754 13:02:50 -- common/autotest_common.sh@926 -- # '[' -z 84550 ']' 00:40:31.754 13:02:50 -- common/autotest_common.sh@930 -- # kill -0 84550 00:40:31.754 13:02:50 -- common/autotest_common.sh@931 -- # uname 00:40:31.754 13:02:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:40:31.754 13:02:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 84550 00:40:31.754 13:02:50 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:40:31.754 killing process with pid 84550 00:40:31.754 13:02:50 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:40:31.754 13:02:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 84550' 00:40:31.754 13:02:50 -- 
common/autotest_common.sh@945 -- # kill 84550 00:40:31.754 13:02:50 -- common/autotest_common.sh@950 -- # wait 84550 00:40:32.014 13:02:51 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:40:32.014 13:02:51 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:40:32.014 13:02:51 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:40:32.014 13:02:51 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:40:32.014 13:02:51 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:40:32.014 13:02:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:32.014 13:02:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:40:32.014 13:02:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:32.014 13:02:51 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:40:32.014 00:40:32.014 real 0m13.469s 00:40:32.014 user 0m22.805s 00:40:32.014 sys 0m2.281s 00:40:32.014 13:02:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:40:32.014 ************************************ 00:40:32.014 END TEST nvmf_queue_depth 00:40:32.014 ************************************ 00:40:32.014 13:02:51 -- common/autotest_common.sh@10 -- # set +x 00:40:32.014 13:02:51 -- nvmf/nvmf.sh@51 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:40:32.014 13:02:51 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:40:32.014 13:02:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:40:32.014 13:02:51 -- common/autotest_common.sh@10 -- # set +x 00:40:32.014 ************************************ 00:40:32.014 START TEST nvmf_multipath 00:40:32.014 ************************************ 00:40:32.014 13:02:51 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:40:32.014 * Looking for test storage... 
00:40:32.014 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:40:32.014 13:02:51 -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:40:32.014 13:02:51 -- nvmf/common.sh@7 -- # uname -s 00:40:32.014 13:02:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:32.014 13:02:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:32.014 13:02:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:32.014 13:02:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:32.014 13:02:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:32.014 13:02:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:32.014 13:02:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:32.014 13:02:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:32.014 13:02:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:32.014 13:02:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:32.014 13:02:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:864ae4a3-cd96-439b-8ef2-6dab4d992115 00:40:32.014 13:02:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=864ae4a3-cd96-439b-8ef2-6dab4d992115 00:40:32.014 13:02:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:32.014 13:02:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:32.014 13:02:51 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:40:32.014 13:02:51 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:40:32.014 13:02:51 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:32.014 13:02:51 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:32.014 13:02:51 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:32.014 13:02:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:32.014 13:02:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:32.014 13:02:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:32.014 13:02:51 -- 
paths/export.sh@5 -- # export PATH 00:40:32.014 13:02:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:32.014 13:02:51 -- nvmf/common.sh@46 -- # : 0 00:40:32.014 13:02:51 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:40:32.014 13:02:51 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:40:32.014 13:02:51 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:40:32.014 13:02:51 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:32.014 13:02:51 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:32.014 13:02:51 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:40:32.014 13:02:51 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:40:32.014 13:02:51 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:40:32.014 13:02:51 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:40:32.014 13:02:51 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:40:32.014 13:02:51 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:40:32.014 13:02:51 -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:40:32.014 13:02:51 -- target/multipath.sh@43 -- # nvmftestinit 00:40:32.014 13:02:51 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:40:32.014 13:02:51 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:32.014 13:02:51 -- nvmf/common.sh@436 -- # prepare_net_devs 00:40:32.014 13:02:51 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:40:32.014 13:02:51 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:40:32.014 13:02:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:32.014 13:02:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:40:32.014 13:02:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:32.014 13:02:51 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:40:32.014 13:02:51 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:40:32.014 13:02:51 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:40:32.014 13:02:51 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:40:32.014 13:02:51 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:40:32.014 13:02:51 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:40:32.014 13:02:51 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:32.014 13:02:51 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:32.014 13:02:51 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:40:32.014 13:02:51 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:40:32.014 13:02:51 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:40:32.014 13:02:51 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:40:32.014 13:02:51 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:40:32.014 13:02:51 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:32.014 13:02:51 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:40:32.014 13:02:51 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:40:32.014 13:02:51 -- nvmf/common.sh@150 -- # 
NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:40:32.014 13:02:51 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:40:32.014 13:02:51 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:40:32.273 13:02:51 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:40:32.273 Cannot find device "nvmf_tgt_br" 00:40:32.273 13:02:51 -- nvmf/common.sh@154 -- # true 00:40:32.273 13:02:51 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:40:32.273 Cannot find device "nvmf_tgt_br2" 00:40:32.273 13:02:51 -- nvmf/common.sh@155 -- # true 00:40:32.273 13:02:51 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:40:32.273 13:02:51 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:40:32.273 Cannot find device "nvmf_tgt_br" 00:40:32.273 13:02:51 -- nvmf/common.sh@157 -- # true 00:40:32.273 13:02:51 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:40:32.273 Cannot find device "nvmf_tgt_br2" 00:40:32.273 13:02:51 -- nvmf/common.sh@158 -- # true 00:40:32.273 13:02:51 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:40:32.273 13:02:51 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:40:32.273 13:02:51 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:40:32.273 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:40:32.273 13:02:51 -- nvmf/common.sh@161 -- # true 00:40:32.273 13:02:51 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:40:32.273 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:40:32.273 13:02:51 -- nvmf/common.sh@162 -- # true 00:40:32.273 13:02:51 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:40:32.273 13:02:51 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:40:32.273 13:02:51 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:40:32.273 13:02:51 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:40:32.273 13:02:51 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:40:32.273 13:02:51 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:40:32.273 13:02:51 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:40:32.273 13:02:51 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:40:32.273 13:02:51 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:40:32.273 13:02:51 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:40:32.273 13:02:51 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:40:32.273 13:02:51 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:40:32.273 13:02:51 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:40:32.273 13:02:51 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:40:32.273 13:02:51 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:40:32.273 13:02:51 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:40:32.273 13:02:51 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:40:32.273 13:02:51 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:40:32.273 13:02:51 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:40:32.273 13:02:51 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:40:32.273 13:02:51 
-- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:40:32.532 13:02:51 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:40:32.532 13:02:51 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:40:32.532 13:02:51 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:40:32.532 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:32.532 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:40:32.532 00:40:32.532 --- 10.0.0.2 ping statistics --- 00:40:32.532 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:32.532 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:40:32.532 13:02:51 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:40:32.532 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:40:32.532 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:40:32.532 00:40:32.532 --- 10.0.0.3 ping statistics --- 00:40:32.532 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:32.532 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:40:32.532 13:02:51 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:40:32.532 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:40:32.532 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.018 ms 00:40:32.532 00:40:32.532 --- 10.0.0.1 ping statistics --- 00:40:32.532 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:32.532 rtt min/avg/max/mdev = 0.018/0.018/0.018/0.000 ms 00:40:32.532 13:02:51 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:32.532 13:02:51 -- nvmf/common.sh@421 -- # return 0 00:40:32.532 13:02:51 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:40:32.532 13:02:51 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:32.532 13:02:51 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:40:32.532 13:02:51 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:40:32.532 13:02:51 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:32.532 13:02:51 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:40:32.532 13:02:51 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:40:32.532 13:02:51 -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:40:32.532 13:02:51 -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:40:32.532 13:02:51 -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:40:32.532 13:02:51 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:40:32.532 13:02:51 -- common/autotest_common.sh@712 -- # xtrace_disable 00:40:32.532 13:02:51 -- common/autotest_common.sh@10 -- # set +x 00:40:32.532 13:02:51 -- nvmf/common.sh@469 -- # nvmfpid=84941 00:40:32.532 13:02:51 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:40:32.532 13:02:51 -- nvmf/common.sh@470 -- # waitforlisten 84941 00:40:32.532 13:02:51 -- common/autotest_common.sh@819 -- # '[' -z 84941 ']' 00:40:32.532 13:02:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:32.533 13:02:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:40:32.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:32.533 13:02:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
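The nvmf_veth_init trace above is the complete recipe for the virtual test network: one namespace holding the two target-side veth ends (10.0.0.2 and 10.0.0.3), the initiator end (10.0.0.1) left in the default namespace, and a bridge joining the host-side peers. Condensed into a standalone sketch using the same names and addresses as the log (a distillation of the logged commands, not the harness itself; assumes root plus iproute2 and iptables):

# namespace and three veth pairs; the *_br ends stay host-side, the target ends move in
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# addressing: 10.0.0.1 = initiator, 10.0.0.2 / 10.0.0.3 = the two target listeners
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# bring everything up and enslave the host-side peers to one bridge
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# admit NVMe/TCP on port 4420, allow bridge-local forwarding, then the same ping checks as the log
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1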
00:40:32.533 13:02:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:40:32.533 13:02:51 -- common/autotest_common.sh@10 -- # set +x 00:40:32.533 [2024-07-22 13:02:51.801976] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:40:32.533 [2024-07-22 13:02:51.802078] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:32.533 [2024-07-22 13:02:51.934525] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:40:32.791 [2024-07-22 13:02:52.015890] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:40:32.791 [2024-07-22 13:02:52.016035] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:32.791 [2024-07-22 13:02:52.016047] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:32.791 [2024-07-22 13:02:52.016055] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:32.791 [2024-07-22 13:02:52.016164] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:40:32.791 [2024-07-22 13:02:52.016591] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:40:32.791 [2024-07-22 13:02:52.016761] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:40:32.791 [2024-07-22 13:02:52.016767] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:40:33.359 13:02:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:40:33.359 13:02:52 -- common/autotest_common.sh@852 -- # return 0 00:40:33.359 13:02:52 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:40:33.359 13:02:52 -- common/autotest_common.sh@718 -- # xtrace_disable 00:40:33.359 13:02:52 -- common/autotest_common.sh@10 -- # set +x 00:40:33.618 13:02:52 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:33.618 13:02:52 -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:40:33.618 [2024-07-22 13:02:52.996309] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:33.876 13:02:53 -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:40:34.135 Malloc0 00:40:34.135 13:02:53 -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:40:34.394 13:02:53 -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:40:34.659 13:02:53 -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:34.918 [2024-07-22 13:02:54.101060] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:34.918 13:02:54 -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:40:34.918 [2024-07-22 13:02:54.321285] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:40:35.177 13:02:54 -- target/multipath.sh@67 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:864ae4a3-cd96-439b-8ef2-6dab4d992115 --hostid=864ae4a3-cd96-439b-8ef2-6dab4d992115 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:40:35.177 13:02:54 -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:864ae4a3-cd96-439b-8ef2-6dab4d992115 --hostid=864ae4a3-cd96-439b-8ef2-6dab4d992115 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:40:35.435 13:02:54 -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:40:35.435 13:02:54 -- common/autotest_common.sh@1177 -- # local i=0 00:40:35.435 13:02:54 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:40:35.435 13:02:54 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:40:35.435 13:02:54 -- common/autotest_common.sh@1184 -- # sleep 2 00:40:37.968 13:02:56 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:40:37.968 13:02:56 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:40:37.968 13:02:56 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:40:37.968 13:02:56 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:40:37.968 13:02:56 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:40:37.968 13:02:56 -- common/autotest_common.sh@1187 -- # return 0 00:40:37.968 13:02:56 -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:40:37.968 13:02:56 -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:40:37.968 13:02:56 -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:40:37.969 13:02:56 -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:40:37.969 13:02:56 -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:40:37.969 13:02:56 -- target/multipath.sh@38 -- # echo nvme-subsys0 00:40:37.969 13:02:56 -- target/multipath.sh@38 -- # return 0 00:40:37.969 13:02:56 -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:40:37.969 13:02:56 -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:40:37.969 13:02:56 -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:40:37.969 13:02:56 -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:40:37.969 13:02:56 -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:40:37.969 13:02:56 -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:40:37.969 13:02:56 -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:40:37.969 13:02:56 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:40:37.969 13:02:56 -- target/multipath.sh@22 -- # local timeout=20 00:40:37.969 13:02:56 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:40:37.969 13:02:56 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:40:37.969 13:02:56 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:40:37.969 13:02:56 -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:40:37.969 13:02:56 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:40:37.969 13:02:56 -- target/multipath.sh@22 -- # local timeout=20 00:40:37.969 13:02:56 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:40:37.969 13:02:56 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:40:37.969 13:02:56 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:40:37.969 13:02:56 -- target/multipath.sh@85 -- # echo numa 00:40:37.969 13:02:56 -- target/multipath.sh@88 -- # fio_pid=85079 00:40:37.969 13:02:56 -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:40:37.969 13:02:56 -- target/multipath.sh@90 -- # sleep 1 00:40:37.969 [global] 00:40:37.969 thread=1 00:40:37.969 invalidate=1 00:40:37.969 rw=randrw 00:40:37.969 time_based=1 00:40:37.969 runtime=6 00:40:37.969 ioengine=libaio 00:40:37.969 direct=1 00:40:37.969 bs=4096 00:40:37.969 iodepth=128 00:40:37.969 norandommap=0 00:40:37.969 numjobs=1 00:40:37.969 00:40:37.969 verify_dump=1 00:40:37.969 verify_backlog=512 00:40:37.969 verify_state_save=0 00:40:37.969 do_verify=1 00:40:37.969 verify=crc32c-intel 00:40:37.969 [job0] 00:40:37.969 filename=/dev/nvme0n1 00:40:37.969 Could not set queue depth (nvme0n1) 00:40:37.969 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:40:37.969 fio-3.35 00:40:37.969 Starting 1 thread 00:40:38.543 13:02:57 -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:40:38.801 13:02:58 -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:40:39.059 13:02:58 -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:40:39.059 13:02:58 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:40:39.059 13:02:58 -- target/multipath.sh@22 -- # local timeout=20 00:40:39.059 13:02:58 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:40:39.059 13:02:58 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:40:39.059 13:02:58 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:40:39.059 13:02:58 -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:40:39.059 13:02:58 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:40:39.059 13:02:58 -- target/multipath.sh@22 -- # local timeout=20 00:40:39.059 13:02:58 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:40:39.059 13:02:58 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:40:39.059 13:02:58 -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:40:39.059 13:02:58 -- target/multipath.sh@25 -- # sleep 1s 00:40:39.994 13:02:59 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:40:39.994 13:02:59 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:40:39.994 13:02:59 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:40:39.994 13:02:59 -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:40:40.253 13:02:59 -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:40:40.512 13:02:59 -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:40:40.512 13:02:59 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:40:40.512 13:02:59 -- target/multipath.sh@22 -- # local timeout=20 00:40:40.512 13:02:59 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:40:40.512 13:02:59 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:40:40.512 13:02:59 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:40:40.512 13:02:59 -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:40:40.512 13:02:59 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:40:40.512 13:02:59 -- target/multipath.sh@22 -- # local timeout=20 00:40:40.512 13:02:59 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:40:40.512 13:02:59 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:40:40.512 13:02:59 -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:40:40.512 13:02:59 -- target/multipath.sh@25 -- # sleep 1s 00:40:41.447 13:03:00 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:40:41.447 13:03:00 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:40:41.447 13:03:00 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:40:41.447 13:03:00 -- target/multipath.sh@104 -- # wait 85079 00:40:43.980 00:40:43.980 job0: (groupid=0, jobs=1): err= 0: pid=85100: Mon Jul 22 13:03:03 2024 00:40:43.980 read: IOPS=11.5k, BW=44.9MiB/s (47.1MB/s)(270MiB/6005msec) 00:40:43.980 slat (usec): min=4, max=4879, avg=48.97, stdev=214.44 00:40:43.980 clat (usec): min=1035, max=13030, avg=7515.71, stdev=1156.04 00:40:43.980 lat (usec): min=1315, max=13040, avg=7564.68, stdev=1162.86 00:40:43.980 clat percentiles (usec): 00:40:43.980 | 1.00th=[ 4490], 5.00th=[ 5866], 10.00th=[ 6259], 20.00th=[ 6652], 00:40:43.980 | 30.00th=[ 6980], 40.00th=[ 7177], 50.00th=[ 7439], 60.00th=[ 7701], 00:40:43.980 | 70.00th=[ 8029], 80.00th=[ 8356], 90.00th=[ 8848], 95.00th=[ 9372], 00:40:43.980 | 99.00th=[11076], 99.50th=[11469], 99.90th=[11994], 99.95th=[12256], 00:40:43.980 | 99.99th=[13042] 00:40:43.980 bw ( KiB/s): min= 9800, max=29056, per=53.37%, avg=24534.55, stdev=5774.45, samples=11 00:40:43.980 iops : min= 2450, max= 7264, avg=6133.64, stdev=1443.61, samples=11 00:40:43.980 write: IOPS=6897, BW=26.9MiB/s (28.3MB/s)(145MiB/5381msec); 0 zone resets 00:40:43.980 slat (usec): min=15, max=2533, avg=61.23, stdev=151.54 00:40:43.980 clat (usec): min=648, max=12290, avg=6550.29, stdev=931.09 00:40:43.980 lat (usec): min=1160, max=12315, avg=6611.52, stdev=934.50 00:40:43.980 clat percentiles (usec): 00:40:43.980 | 1.00th=[ 3654], 5.00th=[ 4883], 10.00th=[ 5538], 20.00th=[ 5997], 00:40:43.980 | 30.00th=[ 6259], 40.00th=[ 6456], 50.00th=[ 6587], 60.00th=[ 6783], 00:40:43.980 | 70.00th=[ 6980], 80.00th=[ 7177], 90.00th=[ 7439], 95.00th=[ 7767], 00:40:43.980 | 99.00th=[ 9372], 99.50th=[10028], 99.90th=[11338], 99.95th=[11731], 00:40:43.980 | 99.99th=[12125] 00:40:43.980 bw ( KiB/s): min=10200, max=28888, per=88.99%, avg=24554.91, stdev=5671.37, samples=11 00:40:43.980 iops : min= 2550, max= 7222, avg=6138.73, stdev=1417.84, samples=11 00:40:43.980 lat (usec) : 750=0.01% 00:40:43.980 lat (msec) : 2=0.02%, 4=0.86%, 10=97.00%, 20=2.12% 00:40:43.980 cpu : usr=5.93%, sys=23.85%, ctx=6660, majf=0, minf=145 00:40:43.980 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:40:43.980 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:43.980 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:40:43.980 issued rwts: total=69016,37118,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:43.980 latency : target=0, window=0, percentile=100.00%, depth=128 00:40:43.980 00:40:43.980 Run status group 0 (all jobs): 00:40:43.980 READ: bw=44.9MiB/s (47.1MB/s), 44.9MiB/s-44.9MiB/s (47.1MB/s-47.1MB/s), io=270MiB (283MB), run=6005-6005msec 00:40:43.980 WRITE: bw=26.9MiB/s (28.3MB/s), 26.9MiB/s-26.9MiB/s (28.3MB/s-28.3MB/s), io=145MiB (152MB), run=5381-5381msec 00:40:43.980 00:40:43.980 Disk stats (read/write): 00:40:43.980 nvme0n1: ios=68041/36378, merge=0/0, ticks=479827/222412, in_queue=702239, util=98.60% 00:40:43.980 13:03:03 -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:40:44.239 13:03:03 -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:40:44.239 13:03:03 -- target/multipath.sh@109 -- # check_ana_state 
nvme0c0n1 optimized 00:40:44.239 13:03:03 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:40:44.239 13:03:03 -- target/multipath.sh@22 -- # local timeout=20 00:40:44.239 13:03:03 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:40:44.239 13:03:03 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:40:44.239 13:03:03 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:40:44.239 13:03:03 -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:40:44.239 13:03:03 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:40:44.239 13:03:03 -- target/multipath.sh@22 -- # local timeout=20 00:40:44.239 13:03:03 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:40:44.239 13:03:03 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:40:44.239 13:03:03 -- target/multipath.sh@25 -- # [[ inaccessible != \o\p\t\i\m\i\z\e\d ]] 00:40:44.239 13:03:03 -- target/multipath.sh@25 -- # sleep 1s 00:40:45.615 13:03:04 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:40:45.615 13:03:04 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:40:45.615 13:03:04 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:40:45.615 13:03:04 -- target/multipath.sh@113 -- # echo round-robin 00:40:45.615 13:03:04 -- target/multipath.sh@116 -- # fio_pid=85228 00:40:45.615 13:03:04 -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:40:45.615 13:03:04 -- target/multipath.sh@118 -- # sleep 1 00:40:45.615 [global] 00:40:45.615 thread=1 00:40:45.615 invalidate=1 00:40:45.615 rw=randrw 00:40:45.615 time_based=1 00:40:45.615 runtime=6 00:40:45.615 ioengine=libaio 00:40:45.615 direct=1 00:40:45.615 bs=4096 00:40:45.615 iodepth=128 00:40:45.615 norandommap=0 00:40:45.615 numjobs=1 00:40:45.615 00:40:45.615 verify_dump=1 00:40:45.615 verify_backlog=512 00:40:45.615 verify_state_save=0 00:40:45.615 do_verify=1 00:40:45.615 verify=crc32c-intel 00:40:45.615 [job0] 00:40:45.615 filename=/dev/nvme0n1 00:40:45.615 Could not set queue depth (nvme0n1) 00:40:45.615 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:40:45.615 fio-3.35 00:40:45.615 Starting 1 thread 00:40:46.551 13:03:05 -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:40:46.551 13:03:05 -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:40:46.810 13:03:06 -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:40:46.810 13:03:06 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:40:46.810 13:03:06 -- target/multipath.sh@22 -- # local timeout=20 00:40:46.810 13:03:06 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:40:46.810 13:03:06 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:40:46.810 13:03:06 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:40:46.810 13:03:06 -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:40:46.810 13:03:06 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:40:46.810 13:03:06 -- target/multipath.sh@22 -- # local timeout=20 00:40:46.810 13:03:06 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:40:46.810 13:03:06 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:40:46.810 13:03:06 -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:40:46.810 13:03:06 -- target/multipath.sh@25 -- # sleep 1s 00:40:47.745 13:03:07 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:40:47.745 13:03:07 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:40:47.745 13:03:07 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:40:47.745 13:03:07 -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:40:48.004 13:03:07 -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:40:48.263 13:03:07 -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:40:48.263 13:03:07 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:40:48.263 13:03:07 -- target/multipath.sh@22 -- # local timeout=20 00:40:48.263 13:03:07 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:40:48.263 13:03:07 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:40:48.263 13:03:07 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:40:48.263 13:03:07 -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:40:48.263 13:03:07 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:40:48.263 13:03:07 -- target/multipath.sh@22 -- # local timeout=20 00:40:48.263 13:03:07 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:40:48.263 13:03:07 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:40:48.263 13:03:07 -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:40:48.263 13:03:07 -- target/multipath.sh@25 -- # sleep 1s 00:40:49.639 13:03:08 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:40:49.639 13:03:08 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:40:49.639 13:03:08 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:40:49.639 13:03:08 -- target/multipath.sh@132 -- # wait 85228 00:40:51.542 00:40:51.542 job0: (groupid=0, jobs=1): err= 0: pid=85249: Mon Jul 22 13:03:10 2024 00:40:51.542 read: IOPS=12.8k, BW=50.1MiB/s (52.5MB/s)(301MiB/6006msec) 00:40:51.542 slat (usec): min=4, max=5377, avg=40.64, stdev=198.79 00:40:51.542 clat (usec): min=266, max=14188, avg=7016.41, stdev=1591.93 00:40:51.542 lat (usec): min=311, max=14197, avg=7057.05, stdev=1608.88 00:40:51.542 clat percentiles (usec): 00:40:51.542 | 1.00th=[ 2573], 5.00th=[ 3916], 10.00th=[ 4817], 20.00th=[ 5997], 00:40:51.542 | 30.00th=[ 6652], 40.00th=[ 6915], 50.00th=[ 7111], 60.00th=[ 7373], 00:40:51.542 | 70.00th=[ 7767], 80.00th=[ 8225], 90.00th=[ 8717], 95.00th=[ 9241], 00:40:51.542 | 99.00th=[11076], 99.50th=[11600], 99.90th=[12387], 99.95th=[12649], 00:40:51.542 | 99.99th=[13435] 00:40:51.542 bw ( KiB/s): min= 4296, max=45184, per=51.89%, avg=26605.09, stdev=11622.38, samples=11 00:40:51.542 iops : min= 1074, max=11296, avg=6651.27, stdev=2905.59, samples=11 00:40:51.542 write: IOPS=7633, BW=29.8MiB/s (31.3MB/s)(151MiB/5055msec); 0 zone resets 00:40:51.542 slat (usec): min=11, max=2426, avg=49.54, stdev=125.79 00:40:51.542 clat (usec): min=495, max=12478, avg=5748.28, stdev=1620.26 00:40:51.542 lat (usec): min=543, max=12505, avg=5797.82, stdev=1635.31 00:40:51.542 clat percentiles (usec): 00:40:51.542 | 1.00th=[ 2212], 5.00th=[ 2868], 10.00th=[ 3294], 20.00th=[ 4080], 00:40:51.542 | 30.00th=[ 4817], 40.00th=[ 5735], 50.00th=[ 6194], 60.00th=[ 6521], 00:40:51.542 | 70.00th=[ 6849], 80.00th=[ 7046], 90.00th=[ 7439], 95.00th=[ 7701], 00:40:51.542 | 99.00th=[ 9110], 99.50th=[10159], 99.90th=[11600], 99.95th=[12125], 00:40:51.542 | 99.99th=[12387] 00:40:51.542 bw ( KiB/s): min= 4792, max=45568, per=87.14%, avg=26606.55, stdev=11419.10, samples=11 00:40:51.542 iops : min= 1198, max=11392, avg=6651.64, stdev=2854.77, samples=11 00:40:51.542 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:40:51.542 lat (msec) : 2=0.25%, 4=9.65%, 10=88.24%, 20=1.85% 00:40:51.542 cpu : usr=6.09%, sys=24.40%, ctx=7326, majf=0, minf=133 00:40:51.542 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:40:51.542 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:51.542 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:40:51.542 issued rwts: total=76985,38586,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:51.542 latency : target=0, window=0, percentile=100.00%, depth=128 00:40:51.542 00:40:51.542 Run status group 0 (all jobs): 00:40:51.542 READ: bw=50.1MiB/s (52.5MB/s), 50.1MiB/s-50.1MiB/s (52.5MB/s-52.5MB/s), io=301MiB (315MB), run=6006-6006msec 00:40:51.542 WRITE: bw=29.8MiB/s (31.3MB/s), 29.8MiB/s-29.8MiB/s (31.3MB/s-31.3MB/s), io=151MiB (158MB), run=5055-5055msec 00:40:51.542 00:40:51.542 Disk stats (read/write): 00:40:51.542 nvme0n1: ios=75504/38393, merge=0/0, ticks=495198/204477, in_queue=699675, util=98.60% 00:40:51.542 13:03:10 -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:40:51.800 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:40:51.800 13:03:10 -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:40:51.800 13:03:10 -- common/autotest_common.sh@1198 -- # local i=0 00:40:51.800 13:03:10 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:40:51.800 13:03:10 -- 
common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:40:51.800 13:03:10 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:40:51.800 13:03:10 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:40:51.800 13:03:11 -- common/autotest_common.sh@1210 -- # return 0 00:40:51.800 13:03:11 -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:40:52.058 13:03:11 -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:40:52.058 13:03:11 -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:40:52.058 13:03:11 -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:40:52.058 13:03:11 -- target/multipath.sh@144 -- # nvmftestfini 00:40:52.058 13:03:11 -- nvmf/common.sh@476 -- # nvmfcleanup 00:40:52.058 13:03:11 -- nvmf/common.sh@116 -- # sync 00:40:52.058 13:03:11 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:40:52.058 13:03:11 -- nvmf/common.sh@119 -- # set +e 00:40:52.058 13:03:11 -- nvmf/common.sh@120 -- # for i in {1..20} 00:40:52.058 13:03:11 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:40:52.058 rmmod nvme_tcp 00:40:52.058 rmmod nvme_fabrics 00:40:52.058 rmmod nvme_keyring 00:40:52.058 13:03:11 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:40:52.058 13:03:11 -- nvmf/common.sh@123 -- # set -e 00:40:52.058 13:03:11 -- nvmf/common.sh@124 -- # return 0 00:40:52.058 13:03:11 -- nvmf/common.sh@477 -- # '[' -n 84941 ']' 00:40:52.058 13:03:11 -- nvmf/common.sh@478 -- # killprocess 84941 00:40:52.058 13:03:11 -- common/autotest_common.sh@926 -- # '[' -z 84941 ']' 00:40:52.058 13:03:11 -- common/autotest_common.sh@930 -- # kill -0 84941 00:40:52.058 13:03:11 -- common/autotest_common.sh@931 -- # uname 00:40:52.058 13:03:11 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:40:52.058 13:03:11 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 84941 00:40:52.058 killing process with pid 84941 00:40:52.058 13:03:11 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:40:52.058 13:03:11 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:40:52.058 13:03:11 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 84941' 00:40:52.058 13:03:11 -- common/autotest_common.sh@945 -- # kill 84941 00:40:52.058 13:03:11 -- common/autotest_common.sh@950 -- # wait 84941 00:40:52.316 13:03:11 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:40:52.316 13:03:11 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:40:52.316 13:03:11 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:40:52.316 13:03:11 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:40:52.316 13:03:11 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:40:52.316 13:03:11 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:52.316 13:03:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:40:52.316 13:03:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:52.316 13:03:11 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:40:52.575 00:40:52.575 real 0m20.420s 00:40:52.575 user 1m19.826s 00:40:52.575 sys 0m7.002s 00:40:52.575 13:03:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:40:52.575 13:03:11 -- common/autotest_common.sh@10 -- # set +x 00:40:52.575 ************************************ 00:40:52.575 END TEST nvmf_multipath 00:40:52.575 ************************************ 00:40:52.575 13:03:11 -- nvmf/nvmf.sh@52 -- # run_test 
nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:40:52.575 13:03:11 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:40:52.575 13:03:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:40:52.575 13:03:11 -- common/autotest_common.sh@10 -- # set +x 00:40:52.575 ************************************ 00:40:52.575 START TEST nvmf_zcopy 00:40:52.575 ************************************ 00:40:52.575 13:03:11 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:40:52.575 * Looking for test storage... 00:40:52.575 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:40:52.575 13:03:11 -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:40:52.575 13:03:11 -- nvmf/common.sh@7 -- # uname -s 00:40:52.575 13:03:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:52.575 13:03:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:52.575 13:03:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:52.575 13:03:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:52.575 13:03:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:52.575 13:03:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:52.575 13:03:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:52.575 13:03:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:52.575 13:03:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:52.575 13:03:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:52.575 13:03:11 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:864ae4a3-cd96-439b-8ef2-6dab4d992115 00:40:52.575 13:03:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=864ae4a3-cd96-439b-8ef2-6dab4d992115 00:40:52.575 13:03:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:52.575 13:03:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:52.575 13:03:11 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:40:52.575 13:03:11 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:40:52.575 13:03:11 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:52.575 13:03:11 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:52.575 13:03:11 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:52.576 13:03:11 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:52.576 13:03:11 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:52.576 
13:03:11 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:52.576 13:03:11 -- paths/export.sh@5 -- # export PATH 00:40:52.576 13:03:11 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:52.576 13:03:11 -- nvmf/common.sh@46 -- # : 0 00:40:52.576 13:03:11 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:40:52.576 13:03:11 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:40:52.576 13:03:11 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:40:52.576 13:03:11 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:52.576 13:03:11 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:52.576 13:03:11 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:40:52.576 13:03:11 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:40:52.576 13:03:11 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:40:52.576 13:03:11 -- target/zcopy.sh@12 -- # nvmftestinit 00:40:52.576 13:03:11 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:40:52.576 13:03:11 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:52.576 13:03:11 -- nvmf/common.sh@436 -- # prepare_net_devs 00:40:52.576 13:03:11 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:40:52.576 13:03:11 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:40:52.576 13:03:11 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:52.576 13:03:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:40:52.576 13:03:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:52.576 13:03:11 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:40:52.576 13:03:11 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:40:52.576 13:03:11 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:40:52.576 13:03:11 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:40:52.576 13:03:11 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:40:52.576 13:03:11 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:40:52.576 13:03:11 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:52.576 13:03:11 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:52.576 13:03:11 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:40:52.576 13:03:11 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:40:52.576 13:03:11 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:40:52.576 13:03:11 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:40:52.576 13:03:11 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:40:52.576 13:03:11 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:40:52.576 13:03:11 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:40:52.576 13:03:11 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:40:52.576 13:03:11 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:40:52.576 13:03:11 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:40:52.576 13:03:11 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:40:52.576 13:03:11 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:40:52.576 Cannot find device "nvmf_tgt_br" 00:40:52.576 13:03:11 -- nvmf/common.sh@154 -- # true 00:40:52.576 13:03:11 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:40:52.576 Cannot find device "nvmf_tgt_br2" 00:40:52.576 13:03:11 -- nvmf/common.sh@155 -- # true 00:40:52.576 13:03:11 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:40:52.576 13:03:11 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:40:52.576 Cannot find device "nvmf_tgt_br" 00:40:52.576 13:03:11 -- nvmf/common.sh@157 -- # true 00:40:52.576 13:03:11 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:40:52.576 Cannot find device "nvmf_tgt_br2" 00:40:52.576 13:03:11 -- nvmf/common.sh@158 -- # true 00:40:52.576 13:03:11 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:40:52.835 13:03:12 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:40:52.835 13:03:12 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:40:52.835 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:40:52.835 13:03:12 -- nvmf/common.sh@161 -- # true 00:40:52.835 13:03:12 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:40:52.835 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:40:52.835 13:03:12 -- nvmf/common.sh@162 -- # true 00:40:52.835 13:03:12 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:40:52.835 13:03:12 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:40:52.835 13:03:12 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:40:52.835 13:03:12 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:40:52.835 13:03:12 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:40:52.835 13:03:12 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:40:52.835 13:03:12 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:40:52.835 13:03:12 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:40:52.835 13:03:12 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:40:52.835 13:03:12 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:40:52.835 13:03:12 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:40:52.835 13:03:12 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:40:52.835 13:03:12 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:40:52.835 13:03:12 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:40:52.835 13:03:12 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:40:52.835 13:03:12 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:40:52.835 13:03:12 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:40:52.835 
13:03:12 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:40:52.835 13:03:12 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:40:52.835 13:03:12 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:40:52.835 13:03:12 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:40:52.835 13:03:12 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:40:52.835 13:03:12 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:40:52.835 13:03:12 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:40:52.835 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:52.835 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.088 ms 00:40:52.835 00:40:52.835 --- 10.0.0.2 ping statistics --- 00:40:52.835 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:52.835 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:40:52.835 13:03:12 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:40:52.835 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:40:52.835 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:40:52.835 00:40:52.835 --- 10.0.0.3 ping statistics --- 00:40:52.835 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:52.835 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:40:52.835 13:03:12 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:40:52.835 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:40:52.835 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:40:52.835 00:40:52.835 --- 10.0.0.1 ping statistics --- 00:40:52.835 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:52.835 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:40:52.835 13:03:12 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:52.835 13:03:12 -- nvmf/common.sh@421 -- # return 0 00:40:52.835 13:03:12 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:40:52.835 13:03:12 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:52.835 13:03:12 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:40:52.835 13:03:12 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:40:52.835 13:03:12 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:52.835 13:03:12 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:40:52.835 13:03:12 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:40:52.835 13:03:12 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:40:52.835 13:03:12 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:40:52.835 13:03:12 -- common/autotest_common.sh@712 -- # xtrace_disable 00:40:52.835 13:03:12 -- common/autotest_common.sh@10 -- # set +x 00:40:52.835 13:03:12 -- nvmf/common.sh@469 -- # nvmfpid=85532 00:40:52.835 13:03:12 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:40:52.835 13:03:12 -- nvmf/common.sh@470 -- # waitforlisten 85532 00:40:52.835 13:03:12 -- common/autotest_common.sh@819 -- # '[' -z 85532 ']' 00:40:52.835 13:03:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:52.835 13:03:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:40:52.835 13:03:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:52.835 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
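At this point the zcopy test has repeated the same veth bring-up and is starting its own target with a single-core mask. What nvmfappstart -m 0x2 plus waitforlisten amount to, reduced to a sketch (the polling loop below is a simplified, hypothetical stand-in for the harness's waitforlisten helper, not its exact code):

SPDK=/home/vagrant/spdk_repo/spdk

# launch nvmf_tgt inside the test namespace, as the log does, and remember its pid
ip netns exec nvmf_tgt_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!

# wait until the RPC socket answers; probing rpc_get_methods here is a simplification
for _ in $(seq 1 100); do
    [ -S /var/tmp/spdk.sock ] &&
        "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods > /dev/null 2>&1 && break
    sleep 0.1
done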
00:40:52.835 13:03:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:40:52.835 13:03:12 -- common/autotest_common.sh@10 -- # set +x 00:40:53.094 [2024-07-22 13:03:12.306439] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:40:53.094 [2024-07-22 13:03:12.306596] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:53.094 [2024-07-22 13:03:12.451382] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:53.353 [2024-07-22 13:03:12.536691] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:40:53.353 [2024-07-22 13:03:12.537079] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:53.353 [2024-07-22 13:03:12.537236] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:53.353 [2024-07-22 13:03:12.537263] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:53.353 [2024-07-22 13:03:12.537300] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:40:53.919 13:03:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:40:53.919 13:03:13 -- common/autotest_common.sh@852 -- # return 0 00:40:53.919 13:03:13 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:40:53.919 13:03:13 -- common/autotest_common.sh@718 -- # xtrace_disable 00:40:53.919 13:03:13 -- common/autotest_common.sh@10 -- # set +x 00:40:53.919 13:03:13 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:53.919 13:03:13 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:40:53.919 13:03:13 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:40:53.919 13:03:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:40:53.919 13:03:13 -- common/autotest_common.sh@10 -- # set +x 00:40:54.177 [2024-07-22 13:03:13.343857] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:54.177 13:03:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:40:54.177 13:03:13 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:40:54.177 13:03:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:40:54.177 13:03:13 -- common/autotest_common.sh@10 -- # set +x 00:40:54.177 13:03:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:40:54.177 13:03:13 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:54.177 13:03:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:40:54.177 13:03:13 -- common/autotest_common.sh@10 -- # set +x 00:40:54.177 [2024-07-22 13:03:13.359968] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:54.177 13:03:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:40:54.177 13:03:13 -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:40:54.177 13:03:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:40:54.177 13:03:13 -- common/autotest_common.sh@10 -- # set +x 00:40:54.177 13:03:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:40:54.177 13:03:13 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 
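Gathered from the rpc_cmd traces above, the whole zero-copy target bring-up is a handful of RPCs. A sketch against the rpc.py path used throughout this run, with the flags copied verbatim from the log (the malloc bdev is attached as namespace 1 immediately after this, as the next trace lines show):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# TCP transport with zero-copy enabled (--zcopy); remaining flags as logged
$rpc nvmf_create_transport -t tcp -o -c 0 --zcopy

# subsystem (up to 10 namespaces), data + discovery listeners on 10.0.0.2:4420,
# and a 32 MB / 4096-byte-block malloc bdev to serve as the namespace
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$rpc bdev_malloc_create 32 4096 -b malloc0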
00:40:54.177 13:03:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:40:54.177 13:03:13 -- common/autotest_common.sh@10 -- # set +x 00:40:54.177 malloc0 00:40:54.177 13:03:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:40:54.177 13:03:13 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:40:54.177 13:03:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:40:54.177 13:03:13 -- common/autotest_common.sh@10 -- # set +x 00:40:54.177 13:03:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:40:54.177 13:03:13 -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:40:54.177 13:03:13 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:40:54.177 13:03:13 -- nvmf/common.sh@520 -- # config=() 00:40:54.177 13:03:13 -- nvmf/common.sh@520 -- # local subsystem config 00:40:54.177 13:03:13 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:40:54.177 13:03:13 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:40:54.177 { 00:40:54.177 "params": { 00:40:54.177 "name": "Nvme$subsystem", 00:40:54.177 "trtype": "$TEST_TRANSPORT", 00:40:54.177 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:54.177 "adrfam": "ipv4", 00:40:54.177 "trsvcid": "$NVMF_PORT", 00:40:54.177 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:54.177 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:54.177 "hdgst": ${hdgst:-false}, 00:40:54.177 "ddgst": ${ddgst:-false} 00:40:54.177 }, 00:40:54.177 "method": "bdev_nvme_attach_controller" 00:40:54.177 } 00:40:54.177 EOF 00:40:54.177 )") 00:40:54.177 13:03:13 -- nvmf/common.sh@542 -- # cat 00:40:54.177 13:03:13 -- nvmf/common.sh@544 -- # jq . 00:40:54.177 13:03:13 -- nvmf/common.sh@545 -- # IFS=, 00:40:54.177 13:03:13 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:40:54.177 "params": { 00:40:54.177 "name": "Nvme1", 00:40:54.177 "trtype": "tcp", 00:40:54.177 "traddr": "10.0.0.2", 00:40:54.177 "adrfam": "ipv4", 00:40:54.177 "trsvcid": "4420", 00:40:54.177 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:54.177 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:54.177 "hdgst": false, 00:40:54.177 "ddgst": false 00:40:54.177 }, 00:40:54.177 "method": "bdev_nvme_attach_controller" 00:40:54.177 }' 00:40:54.177 [2024-07-22 13:03:13.466403] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:40:54.177 [2024-07-22 13:03:13.466552] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85583 ] 00:40:54.438 [2024-07-22 13:03:13.621753] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:54.438 [2024-07-22 13:03:13.715795] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:40:54.702 Running I/O for 10 seconds... 
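The configuration that gen_nvmf_target_json pipes into bdevperf above is a single bdev_nvme_attach_controller entry pointing at the freshly created subsystem. A self-contained equivalent is sketched below; the surrounding "subsystems"/"bdev" wrapper and the temporary file name are assumptions (the helper builds the document in memory and may add further entries), while the params block and the bdevperf flags are taken from the log:

# assumed wrapper around the logged bdev_nvme_attach_controller entry
cat > /tmp/zcopy_bdevperf.json << 'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF

# 10-second verify workload at queue depth 128 and 8 KiB I/O size, as in the log
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /tmp/zcopy_bdevperf.json -t 10 -q 128 -w verify -o 8192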
00:41:04.676 00:41:04.676 Latency(us) 00:41:04.676 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:04.676 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:41:04.676 Verification LBA range: start 0x0 length 0x1000 00:41:04.676 Nvme1n1 : 10.01 9365.13 73.17 0.00 0.00 13632.97 968.15 20733.21 00:41:04.676 =================================================================================================================== 00:41:04.676 Total : 9365.13 73.17 0.00 0.00 13632.97 968.15 20733.21 00:41:04.935 13:03:24 -- target/zcopy.sh@39 -- # perfpid=85694 00:41:04.935 13:03:24 -- target/zcopy.sh@41 -- # xtrace_disable 00:41:04.935 13:03:24 -- common/autotest_common.sh@10 -- # set +x 00:41:04.935 13:03:24 -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:41:04.935 13:03:24 -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:41:04.935 13:03:24 -- nvmf/common.sh@520 -- # config=() 00:41:04.936 13:03:24 -- nvmf/common.sh@520 -- # local subsystem config 00:41:04.936 13:03:24 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:41:04.936 13:03:24 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:41:04.936 { 00:41:04.936 "params": { 00:41:04.936 "name": "Nvme$subsystem", 00:41:04.936 "trtype": "$TEST_TRANSPORT", 00:41:04.936 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:04.936 "adrfam": "ipv4", 00:41:04.936 "trsvcid": "$NVMF_PORT", 00:41:04.936 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:04.936 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:04.936 "hdgst": ${hdgst:-false}, 00:41:04.936 "ddgst": ${ddgst:-false} 00:41:04.936 }, 00:41:04.936 "method": "bdev_nvme_attach_controller" 00:41:04.936 } 00:41:04.936 EOF 00:41:04.936 )") 00:41:04.936 13:03:24 -- nvmf/common.sh@542 -- # cat 00:41:04.936 13:03:24 -- nvmf/common.sh@544 -- # jq . 
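The verify pass summarized above reports 9365.13 IOPS at the 8192-byte I/O size set with -o, which lines up with the 73.17 MiB/s column:

# cross-check of the throughput column: IOPS x I/O size, converted to MiB/s
awk 'BEGIN { printf "%.2f MiB/s\n", 9365.13 * 8192 / 1048576 }'   # prints 73.17 MiB/s

The second bdevperf invocation being prepared here reuses the same attach-controller JSON but switches to a 5-second 50/50 random read/write workload (-t 5 -w randrw -M 50 -o 8192), and the namespace-management RPCs traced next are issued against the live subsystem while that I/O runs.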
00:41:04.936 [2024-07-22 13:03:24.109084] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:04.936 [2024-07-22 13:03:24.109125] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:04.936 13:03:24 -- nvmf/common.sh@545 -- # IFS=, 00:41:04.936 13:03:24 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:41:04.936 "params": { 00:41:04.936 "name": "Nvme1", 00:41:04.936 "trtype": "tcp", 00:41:04.936 "traddr": "10.0.0.2", 00:41:04.936 "adrfam": "ipv4", 00:41:04.936 "trsvcid": "4420", 00:41:04.936 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:04.936 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:04.936 "hdgst": false, 00:41:04.936 "ddgst": false 00:41:04.936 }, 00:41:04.936 "method": "bdev_nvme_attach_controller" 00:41:04.936 }' 00:41:04.936 2024/07/22 13:03:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:04.936 [2024-07-22 13:03:24.121056] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:04.936 [2024-07-22 13:03:24.121079] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:04.936 2024/07/22 13:03:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:04.936 [2024-07-22 13:03:24.133049] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:04.936 [2024-07-22 13:03:24.133070] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:04.936 2024/07/22 13:03:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:04.936 [2024-07-22 13:03:24.145052] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:04.936 [2024-07-22 13:03:24.145073] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:04.936 2024/07/22 13:03:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:04.936 [2024-07-22 13:03:24.152114] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:41:04.936 [2024-07-22 13:03:24.152218] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85694 ] 00:41:04.936 [2024-07-22 13:03:24.157056] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:04.936 [2024-07-22 13:03:24.157077] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:04.936 2024/07/22 13:03:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:04.936 [2024-07-22 13:03:24.169057] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:04.936 [2024-07-22 13:03:24.169076] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:04.936 2024/07/22 13:03:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:04.936 [2024-07-22 13:03:24.181059] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:04.936 [2024-07-22 13:03:24.181074] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:04.936 2024/07/22 13:03:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:04.936 [2024-07-22 13:03:24.193061] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:04.936 [2024-07-22 13:03:24.193080] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:04.936 2024/07/22 13:03:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:04.936 [2024-07-22 13:03:24.205063] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:04.936 [2024-07-22 13:03:24.205082] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:04.936 2024/07/22 13:03:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:04.936 [2024-07-22 13:03:24.217066] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:04.936 [2024-07-22 13:03:24.217081] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:04.936 2024/07/22 13:03:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:04.936 [2024-07-22 13:03:24.229071] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:04.936 [2024-07-22 13:03:24.229090] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:04.936 2024/07/22 13:03:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:04.936 [2024-07-22 13:03:24.241072] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:04.936 [2024-07-22 13:03:24.241087] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:04.936 2024/07/22 13:03:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:04.936 [2024-07-22 13:03:24.253076] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:04.936 [2024-07-22 13:03:24.253092] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:04.936 2024/07/22 13:03:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:04.936 [2024-07-22 13:03:24.265081] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:04.936 [2024-07-22 13:03:24.265099] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:04.936 2024/07/22 13:03:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:04.936 [2024-07-22 13:03:24.277084] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:04.936 [2024-07-22 13:03:24.277103] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:04.936 2024/07/22 13:03:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:04.936 [2024-07-22 13:03:24.286576] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:04.936 [2024-07-22 13:03:24.289090] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:04.936 [2024-07-22 13:03:24.289110] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:04.936 2024/07/22 13:03:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:04.936 [2024-07-22 13:03:24.301102] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:04.936 [2024-07-22 13:03:24.301126] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:04.936 2024/07/22 13:03:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:04.936 [2024-07-22 13:03:24.313094] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:04.936 [2024-07-22 13:03:24.313113] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:04.936 2024/07/22 13:03:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:04.936 [2024-07-22 13:03:24.325095] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:04.936 [2024-07-22 13:03:24.325114] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:04.936 2024/07/22 13:03:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:04.936 [2024-07-22 13:03:24.337110] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:04.936 [2024-07-22 13:03:24.337144] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:04.936 2024/07/22 13:03:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:04.936 [2024-07-22 13:03:24.349106] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:04.937 [2024-07-22 13:03:24.349125] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:04.937 2024/07/22 13:03:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.196 [2024-07-22 13:03:24.357418] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:41:05.196 [2024-07-22 13:03:24.361124] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.196 [2024-07-22 13:03:24.361166] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.196 2024/07/22 13:03:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.196 [2024-07-22 13:03:24.373109] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.196 [2024-07-22 13:03:24.373128] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.196 2024/07/22 13:03:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.196 [2024-07-22 13:03:24.385119] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.196 [2024-07-22 13:03:24.385152] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.196 2024/07/22 13:03:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error 
received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.196 [2024-07-22 13:03:24.397123] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.196 [2024-07-22 13:03:24.397171] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.196 2024/07/22 13:03:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.196 [2024-07-22 13:03:24.409127] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.196 [2024-07-22 13:03:24.409160] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.196 2024/07/22 13:03:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.196 [2024-07-22 13:03:24.421125] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.196 [2024-07-22 13:03:24.421155] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.196 2024/07/22 13:03:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.196 [2024-07-22 13:03:24.433140] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.196 [2024-07-22 13:03:24.433206] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.196 2024/07/22 13:03:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.196 [2024-07-22 13:03:24.445124] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.196 [2024-07-22 13:03:24.445189] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.196 2024/07/22 13:03:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.196 [2024-07-22 13:03:24.457120] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.196 [2024-07-22 13:03:24.457165] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.196 2024/07/22 13:03:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.196 [2024-07-22 13:03:24.469166] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.196 [2024-07-22 13:03:24.469191] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.196 2024/07/22 13:03:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.196 [2024-07-22 13:03:24.481135] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.196 [2024-07-22 13:03:24.481181] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.196 2024/07/22 13:03:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.196 [2024-07-22 13:03:24.493172] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.196 [2024-07-22 13:03:24.493196] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.196 2024/07/22 13:03:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.196 [2024-07-22 13:03:24.505156] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.196 [2024-07-22 13:03:24.505179] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.196 2024/07/22 13:03:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.196 [2024-07-22 13:03:24.517173] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.196 [2024-07-22 13:03:24.517194] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.196 2024/07/22 13:03:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.196 [2024-07-22 13:03:24.529174] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.196 [2024-07-22 13:03:24.529198] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.196 2024/07/22 13:03:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.196 Running I/O for 5 seconds... 
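The identical errors repeated above and below this point are the host-side trace of nvmf_subsystem_add_ns being re-issued against cnode1 while the 5-second random read/write job runs: NSID 1 is already backed by malloc0, so every attempt is rejected with JSON-RPC error -32602 (Invalid parameters) and "Requested NSID 1 already in use" on the target. A hand-run equivalent, assuming the stock scripts/rpc.py location in the same repo checkout used elsewhere in this job, would be:

# Expected to fail while NSID 1 on cnode1 is still occupied by malloc0:
# the target logs "Requested NSID 1 already in use" and the RPC returns -32602.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py \
    nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1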
00:41:05.197 [2024-07-22 13:03:24.541168] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.197 [2024-07-22 13:03:24.541209] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.197 2024/07/22 13:03:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.197 [2024-07-22 13:03:24.557923] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.197 [2024-07-22 13:03:24.557969] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.197 2024/07/22 13:03:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.197 [2024-07-22 13:03:24.569609] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.197 [2024-07-22 13:03:24.569670] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.197 2024/07/22 13:03:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.197 [2024-07-22 13:03:24.586625] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.197 [2024-07-22 13:03:24.586674] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.197 2024/07/22 13:03:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.197 [2024-07-22 13:03:24.601246] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.197 [2024-07-22 13:03:24.601293] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.197 2024/07/22 13:03:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.456 [2024-07-22 13:03:24.617234] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.456 [2024-07-22 13:03:24.617309] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.456 2024/07/22 13:03:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.456 [2024-07-22 13:03:24.633479] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.456 [2024-07-22 13:03:24.633526] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.456 2024/07/22 13:03:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 
Msg=Invalid parameters 00:41:05.456 [2024-07-22 13:03:24.650117] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.456 [2024-07-22 13:03:24.650174] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.456 2024/07/22 13:03:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.456 [2024-07-22 13:03:24.666998] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.456 [2024-07-22 13:03:24.667044] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.456 2024/07/22 13:03:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.456 [2024-07-22 13:03:24.683835] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.456 [2024-07-22 13:03:24.683882] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.456 2024/07/22 13:03:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.456 [2024-07-22 13:03:24.699616] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.456 [2024-07-22 13:03:24.699663] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.456 2024/07/22 13:03:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.456 [2024-07-22 13:03:24.717085] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.456 [2024-07-22 13:03:24.717131] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.457 2024/07/22 13:03:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.457 [2024-07-22 13:03:24.730405] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.457 [2024-07-22 13:03:24.730460] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.457 2024/07/22 13:03:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.457 [2024-07-22 13:03:24.745530] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.457 [2024-07-22 13:03:24.745576] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.457 2024/07/22 13:03:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, 
err: Code=-32602 Msg=Invalid parameters 00:41:05.457 [2024-07-22 13:03:24.763171] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.457 [2024-07-22 13:03:24.763226] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.457 2024/07/22 13:03:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.457 [2024-07-22 13:03:24.779608] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.457 [2024-07-22 13:03:24.779655] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.457 2024/07/22 13:03:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.457 [2024-07-22 13:03:24.795866] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.457 [2024-07-22 13:03:24.795912] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.457 2024/07/22 13:03:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.457 [2024-07-22 13:03:24.812537] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.457 [2024-07-22 13:03:24.812583] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.457 2024/07/22 13:03:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.457 [2024-07-22 13:03:24.829747] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.457 [2024-07-22 13:03:24.829793] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.457 2024/07/22 13:03:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.457 [2024-07-22 13:03:24.846719] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.457 [2024-07-22 13:03:24.846753] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.457 2024/07/22 13:03:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.457 [2024-07-22 13:03:24.863614] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.457 [2024-07-22 13:03:24.863661] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.457 2024/07/22 13:03:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.716 [2024-07-22 13:03:24.880389] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.716 [2024-07-22 13:03:24.880421] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.716 2024/07/22 13:03:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.716 [2024-07-22 13:03:24.896933] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.716 [2024-07-22 13:03:24.896980] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.716 2024/07/22 13:03:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.716 [2024-07-22 13:03:24.912759] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.716 [2024-07-22 13:03:24.912805] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.716 2024/07/22 13:03:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.716 [2024-07-22 13:03:24.930249] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.716 [2024-07-22 13:03:24.930299] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.716 2024/07/22 13:03:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.716 [2024-07-22 13:03:24.947836] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.716 [2024-07-22 13:03:24.947870] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.716 2024/07/22 13:03:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.716 [2024-07-22 13:03:24.962135] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.716 [2024-07-22 13:03:24.962194] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.716 2024/07/22 13:03:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.716 [2024-07-22 13:03:24.979191] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.716 [2024-07-22 13:03:24.979262] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.716 2024/07/22 13:03:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: 
error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.716 [2024-07-22 13:03:24.993568] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.716 [2024-07-22 13:03:24.993619] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.716 2024/07/22 13:03:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.716 [2024-07-22 13:03:25.009939] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.716 [2024-07-22 13:03:25.009986] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.716 2024/07/22 13:03:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.716 [2024-07-22 13:03:25.026795] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.716 [2024-07-22 13:03:25.026843] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.716 2024/07/22 13:03:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.716 [2024-07-22 13:03:25.042878] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.716 [2024-07-22 13:03:25.042924] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.716 2024/07/22 13:03:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.716 [2024-07-22 13:03:25.060312] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.716 [2024-07-22 13:03:25.060359] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.716 2024/07/22 13:03:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.716 [2024-07-22 13:03:25.076889] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.716 [2024-07-22 13:03:25.076936] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.716 2024/07/22 13:03:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.716 [2024-07-22 13:03:25.093407] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.716 [2024-07-22 13:03:25.093460] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.716 2024/07/22 13:03:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.716 [2024-07-22 13:03:25.109511] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.716 [2024-07-22 13:03:25.109558] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.716 2024/07/22 13:03:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.716 [2024-07-22 13:03:25.126297] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.717 [2024-07-22 13:03:25.126343] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.717 2024/07/22 13:03:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.976 [2024-07-22 13:03:25.143484] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.976 [2024-07-22 13:03:25.143531] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.976 2024/07/22 13:03:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.976 [2024-07-22 13:03:25.159256] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.976 [2024-07-22 13:03:25.159302] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.976 2024/07/22 13:03:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.976 [2024-07-22 13:03:25.176483] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.976 [2024-07-22 13:03:25.176530] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.976 2024/07/22 13:03:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.976 [2024-07-22 13:03:25.192845] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.976 [2024-07-22 13:03:25.192892] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.976 2024/07/22 13:03:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.976 [2024-07-22 13:03:25.210292] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.976 [2024-07-22 13:03:25.210371] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.976 2024/07/22 13:03:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.976 [2024-07-22 13:03:25.225877] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.976 [2024-07-22 13:03:25.225922] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.976 2024/07/22 13:03:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.976 [2024-07-22 13:03:25.236059] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.976 [2024-07-22 13:03:25.236104] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.976 2024/07/22 13:03:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.976 [2024-07-22 13:03:25.250196] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.976 [2024-07-22 13:03:25.250272] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.976 2024/07/22 13:03:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.976 [2024-07-22 13:03:25.264490] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.976 [2024-07-22 13:03:25.264525] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.976 2024/07/22 13:03:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.976 [2024-07-22 13:03:25.279522] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.976 [2024-07-22 13:03:25.279570] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.976 2024/07/22 13:03:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.976 [2024-07-22 13:03:25.290306] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.976 [2024-07-22 13:03:25.290353] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.976 2024/07/22 13:03:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.976 [2024-07-22 13:03:25.307320] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.976 [2024-07-22 13:03:25.307347] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.976 2024/07/22 13:03:25 error on JSON-RPC call, method: 
nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.976 [2024-07-22 13:03:25.322724] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.976 [2024-07-22 13:03:25.322806] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.976 2024/07/22 13:03:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.976 [2024-07-22 13:03:25.340318] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.976 [2024-07-22 13:03:25.340365] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.976 2024/07/22 13:03:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.976 [2024-07-22 13:03:25.356454] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.976 [2024-07-22 13:03:25.356502] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.976 2024/07/22 13:03:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.976 [2024-07-22 13:03:25.373081] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.976 [2024-07-22 13:03:25.373129] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.976 2024/07/22 13:03:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.976 [2024-07-22 13:03:25.388507] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.976 [2024-07-22 13:03:25.388571] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.976 2024/07/22 13:03:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:06.236 [2024-07-22 13:03:25.403606] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.236 [2024-07-22 13:03:25.403652] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.236 2024/07/22 13:03:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:06.236 [2024-07-22 13:03:25.417567] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.236 [2024-07-22 13:03:25.417614] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.236 2024/07/22 13:03:25 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:06.236 [2024-07-22 13:03:25.433756] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.236 [2024-07-22 13:03:25.433804] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.236 2024/07/22 13:03:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:06.236 [2024-07-22 13:03:25.450082] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.236 [2024-07-22 13:03:25.450129] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.236 2024/07/22 13:03:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:06.236 [2024-07-22 13:03:25.467839] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.236 [2024-07-22 13:03:25.467886] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.236 2024/07/22 13:03:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:06.236 [2024-07-22 13:03:25.483636] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.236 [2024-07-22 13:03:25.483683] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.236 2024/07/22 13:03:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:06.236 [2024-07-22 13:03:25.498575] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.236 [2024-07-22 13:03:25.498611] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.236 2024/07/22 13:03:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:06.236 [2024-07-22 13:03:25.513028] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.236 [2024-07-22 13:03:25.513074] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.236 2024/07/22 13:03:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:06.236 [2024-07-22 13:03:25.528305] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.236 [2024-07-22 13:03:25.528351] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.236 2024/07/22 
13:03:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:06.236 [2024-07-22 13:03:25.545994] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.236 [2024-07-22 13:03:25.546041] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.236 2024/07/22 13:03:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:06.236 [2024-07-22 13:03:25.560405] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.236 [2024-07-22 13:03:25.560452] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.236 2024/07/22 13:03:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:06.236 [2024-07-22 13:03:25.575898] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.236 [2024-07-22 13:03:25.575930] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.236 2024/07/22 13:03:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:06.236 [2024-07-22 13:03:25.593040] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.236 [2024-07-22 13:03:25.593090] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.236 2024/07/22 13:03:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:06.236 [2024-07-22 13:03:25.608804] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.236 [2024-07-22 13:03:25.608849] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.236 2024/07/22 13:03:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:06.236 [2024-07-22 13:03:25.618417] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.236 [2024-07-22 13:03:25.618485] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.236 2024/07/22 13:03:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:06.236 [2024-07-22 13:03:25.632585] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.236 [2024-07-22 13:03:25.632630] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:41:06.236 2024/07/22 13:03:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:06.236 [2024-07-22 13:03:25.649430] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.236 [2024-07-22 13:03:25.649475] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.236 2024/07/22 13:03:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:06.495 [2024-07-22 13:03:25.664079] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.496 [2024-07-22 13:03:25.664124] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.496 2024/07/22 13:03:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:06.496 [2024-07-22 13:03:25.679436] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.496 [2024-07-22 13:03:25.679481] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.496 2024/07/22 13:03:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:06.496 [2024-07-22 13:03:25.690640] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.496 [2024-07-22 13:03:25.690686] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.496 2024/07/22 13:03:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:06.496 [2024-07-22 13:03:25.707235] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.496 [2024-07-22 13:03:25.707279] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.496 2024/07/22 13:03:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:06.496 [2024-07-22 13:03:25.723465] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.496 [2024-07-22 13:03:25.723510] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.496 2024/07/22 13:03:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:06.496 [2024-07-22 13:03:25.739333] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.496 [2024-07-22 13:03:25.739378] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable 
to add namespace 00:41:06.496 2024/07/22 13:03:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:06.496 [2024-07-22 13:03:25.756077] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.496 [2024-07-22 13:03:25.756125] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.496 2024/07/22 13:03:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:06.496 [2024-07-22 13:03:25.772342] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.496 [2024-07-22 13:03:25.772388] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.496 2024/07/22 13:03:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:06.496 [2024-07-22 13:03:25.789429] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.496 [2024-07-22 13:03:25.789476] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.496 2024/07/22 13:03:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:06.496 [2024-07-22 13:03:25.805864] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.496 [2024-07-22 13:03:25.805911] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.496 2024/07/22 13:03:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:06.496 [2024-07-22 13:03:25.823800] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.496 [2024-07-22 13:03:25.823847] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.496 2024/07/22 13:03:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:06.496 [2024-07-22 13:03:25.838981] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.496 [2024-07-22 13:03:25.839026] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.496 2024/07/22 13:03:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:06.496 [2024-07-22 13:03:25.850340] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.496 [2024-07-22 13:03:25.850399] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:41:06.496 2024/07/22 13:03:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:06.496 [2024-07-22 13:03:25.866553] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.496 [2024-07-22 13:03:25.866600] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.496 2024/07/22 13:03:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:06.496 [2024-07-22 13:03:25.883623] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.496 [2024-07-22 13:03:25.883670] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.496 2024/07/22 13:03:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:06.496 [2024-07-22 13:03:25.900463] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.496 [2024-07-22 13:03:25.900510] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.496 2024/07/22 13:03:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:06.755 [2024-07-22 13:03:25.917078] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.755 [2024-07-22 13:03:25.917126] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.755 2024/07/22 13:03:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:06.755 [2024-07-22 13:03:25.932992] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.755 [2024-07-22 13:03:25.933039] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.755 2024/07/22 13:03:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:06.755 [2024-07-22 13:03:25.948198] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.755 [2024-07-22 13:03:25.948243] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.755 2024/07/22 13:03:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:06.755 [2024-07-22 13:03:25.959562] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.755 [2024-07-22 13:03:25.959609] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.756 2024/07/22 13:03:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:06.756 [2024-07-22 13:03:25.975868] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.756 [2024-07-22 13:03:25.975914] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.756 2024/07/22 13:03:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:06.756 [2024-07-22 13:03:25.992757] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.756 [2024-07-22 13:03:25.992805] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.756 2024/07/22 13:03:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:06.756 [2024-07-22 13:03:26.008234] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.756 [2024-07-22 13:03:26.008280] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.756 2024/07/22 13:03:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:06.756 [2024-07-22 13:03:26.023865] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.756 [2024-07-22 13:03:26.023913] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.756 2024/07/22 13:03:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:06.756 [2024-07-22 13:03:26.040998] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.756 [2024-07-22 13:03:26.041046] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.756 2024/07/22 13:03:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:06.756 [2024-07-22 13:03:26.056107] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.756 [2024-07-22 13:03:26.056179] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.756 2024/07/22 13:03:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:06.756 [2024-07-22 13:03:26.071470] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.756 [2024-07-22 
13:03:26.071518] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.756 2024/07/22 13:03:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:06.756 [2024-07-22 13:03:26.081808] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.756 [2024-07-22 13:03:26.081854] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.756 2024/07/22 13:03:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:06.756 [2024-07-22 13:03:26.095934] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.756 [2024-07-22 13:03:26.095967] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.756 2024/07/22 13:03:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:06.756 [2024-07-22 13:03:26.104995] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.756 [2024-07-22 13:03:26.105041] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.756 2024/07/22 13:03:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:06.756 [2024-07-22 13:03:26.119963] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.756 [2024-07-22 13:03:26.120009] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.756 2024/07/22 13:03:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:06.756 [2024-07-22 13:03:26.135446] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.756 [2024-07-22 13:03:26.135492] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.756 2024/07/22 13:03:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:06.756 [2024-07-22 13:03:26.146697] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.756 [2024-07-22 13:03:26.146746] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.756 2024/07/22 13:03:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:06.756 [2024-07-22 13:03:26.163242] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:41:06.756 [2024-07-22 13:03:26.163287] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.756 2024/07/22 13:03:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:07.016 [2024-07-22 13:03:26.179856] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.016 [2024-07-22 13:03:26.179906] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.016 2024/07/22 13:03:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:07.016 [2024-07-22 13:03:26.197566] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.016 [2024-07-22 13:03:26.197613] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.016 2024/07/22 13:03:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:07.016 [2024-07-22 13:03:26.212585] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.016 [2024-07-22 13:03:26.212631] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.016 2024/07/22 13:03:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:07.016 [2024-07-22 13:03:26.228971] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.016 [2024-07-22 13:03:26.229019] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.016 2024/07/22 13:03:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:07.016 [2024-07-22 13:03:26.244759] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.016 [2024-07-22 13:03:26.244826] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.016 2024/07/22 13:03:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:07.016 [2024-07-22 13:03:26.261063] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.016 [2024-07-22 13:03:26.261110] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.016 2024/07/22 13:03:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:07.016 [2024-07-22 13:03:26.279113] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:41:07.016 [2024-07-22 13:03:26.279173] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.016 2024/07/22 13:03:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:07.016 [2024-07-22 13:03:26.293861] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.016 [2024-07-22 13:03:26.293908] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.016 2024/07/22 13:03:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:07.016 [2024-07-22 13:03:26.309676] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.016 [2024-07-22 13:03:26.309725] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.016 2024/07/22 13:03:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:07.016 [2024-07-22 13:03:26.328032] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.016 [2024-07-22 13:03:26.328081] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.016 2024/07/22 13:03:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:07.016 [2024-07-22 13:03:26.342697] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.016 [2024-07-22 13:03:26.342732] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.016 2024/07/22 13:03:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:07.016 [2024-07-22 13:03:26.357782] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.016 [2024-07-22 13:03:26.357828] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.016 2024/07/22 13:03:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:07.016 [2024-07-22 13:03:26.372685] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.016 [2024-07-22 13:03:26.372732] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.016 2024/07/22 13:03:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:07.016 [2024-07-22 13:03:26.387862] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:41:07.016 [2024-07-22 13:03:26.387908] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.016 2024/07/22 13:03:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:07.016 [2024-07-22 13:03:26.400038] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.016 [2024-07-22 13:03:26.400085] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.016 2024/07/22 13:03:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:07.016 [2024-07-22 13:03:26.415932] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.016 [2024-07-22 13:03:26.415979] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.016 2024/07/22 13:03:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:07.016 [2024-07-22 13:03:26.433422] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.016 [2024-07-22 13:03:26.433469] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.276 2024/07/22 13:03:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:07.276 [2024-07-22 13:03:26.448130] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.276 [2024-07-22 13:03:26.448187] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.276 2024/07/22 13:03:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:07.276 [2024-07-22 13:03:26.463914] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.276 [2024-07-22 13:03:26.463960] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.276 2024/07/22 13:03:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:07.276 [2024-07-22 13:03:26.480560] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.276 [2024-07-22 13:03:26.480607] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.276 2024/07/22 13:03:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:07.276 [2024-07-22 13:03:26.497607] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.276 [2024-07-22 13:03:26.497654] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.276 2024/07/22 13:03:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:07.276 [2024-07-22 13:03:26.513412] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.276 [2024-07-22 13:03:26.513460] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.276 2024/07/22 13:03:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:07.276 [2024-07-22 13:03:26.530961] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.276 [2024-07-22 13:03:26.531006] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.276 2024/07/22 13:03:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:07.276 [2024-07-22 13:03:26.546038] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.276 [2024-07-22 13:03:26.546084] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.276 2024/07/22 13:03:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:07.276 [2024-07-22 13:03:26.561867] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.276 [2024-07-22 13:03:26.561914] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.276 2024/07/22 13:03:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:07.276 [2024-07-22 13:03:26.578841] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.276 [2024-07-22 13:03:26.578890] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.276 2024/07/22 13:03:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:07.276 [2024-07-22 13:03:26.596677] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.276 [2024-07-22 13:03:26.596723] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.276 2024/07/22 13:03:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:07.276 [2024-07-22 
13:03:26.610768] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.276 [2024-07-22 13:03:26.610833] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.276 2024/07/22 13:03:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:07.276 [2024-07-22 13:03:26.627332] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.276 [2024-07-22 13:03:26.627379] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.276 2024/07/22 13:03:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:07.276 [2024-07-22 13:03:26.643906] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.276 [2024-07-22 13:03:26.643953] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.276 2024/07/22 13:03:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:07.276 [2024-07-22 13:03:26.662277] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.276 [2024-07-22 13:03:26.662337] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.276 2024/07/22 13:03:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:07.276 [2024-07-22 13:03:26.676509] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.276 [2024-07-22 13:03:26.676555] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.276 2024/07/22 13:03:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:07.276 [2024-07-22 13:03:26.690077] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.276 [2024-07-22 13:03:26.690124] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.276 2024/07/22 13:03:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:07.535 [2024-07-22 13:03:26.706975] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.535 [2024-07-22 13:03:26.707021] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.535 2024/07/22 13:03:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 
00:41:07.535 [2024-07-22 13:03:26.722267] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.535 [2024-07-22 13:03:26.722315] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.535 2024/07/22 13:03:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:07.535 [2024-07-22 13:03:26.736235] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.535 [2024-07-22 13:03:26.736281] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.535 2024/07/22 13:03:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:07.535 [2024-07-22 13:03:26.752705] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.536 [2024-07-22 13:03:26.752764] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.536 2024/07/22 13:03:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:07.536 [2024-07-22 13:03:26.768341] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.536 [2024-07-22 13:03:26.768389] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.536 2024/07/22 13:03:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:07.536 [2024-07-22 13:03:26.778730] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.536 [2024-07-22 13:03:26.778793] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.536 2024/07/22 13:03:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:07.536 [2024-07-22 13:03:26.792819] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.536 [2024-07-22 13:03:26.792881] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.536 2024/07/22 13:03:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:07.536 [2024-07-22 13:03:26.809413] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.536 [2024-07-22 13:03:26.809460] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.536 2024/07/22 13:03:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 
Msg=Invalid parameters 00:41:07.536 [2024-07-22 13:03:26.825860] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.536 [2024-07-22 13:03:26.825909] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.536 2024/07/22 13:03:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:07.536 [2024-07-22 13:03:26.841850] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.536 [2024-07-22 13:03:26.841897] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.536 2024/07/22 13:03:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:07.536 [2024-07-22 13:03:26.858245] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.536 [2024-07-22 13:03:26.858291] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.536 2024/07/22 13:03:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:07.536 [2024-07-22 13:03:26.874822] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.536 [2024-07-22 13:03:26.874886] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.536 2024/07/22 13:03:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:07.536 [2024-07-22 13:03:26.891478] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.536 [2024-07-22 13:03:26.891525] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.536 2024/07/22 13:03:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:07.536 [2024-07-22 13:03:26.908031] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.536 [2024-07-22 13:03:26.908078] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.536 2024/07/22 13:03:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:07.536 [2024-07-22 13:03:26.924994] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.536 [2024-07-22 13:03:26.925041] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.536 2024/07/22 13:03:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, 
err: Code=-32602 Msg=Invalid parameters 00:41:07.536 [2024-07-22 13:03:26.941315] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.536 [2024-07-22 13:03:26.941362] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.536 2024/07/22 13:03:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:07.796 [2024-07-22 13:03:26.960238] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.796 [2024-07-22 13:03:26.960316] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.796 2024/07/22 13:03:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:07.796 [2024-07-22 13:03:26.974695] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.796 [2024-07-22 13:03:26.974744] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.796 2024/07/22 13:03:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:07.796 [2024-07-22 13:03:26.991972] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.796 [2024-07-22 13:03:26.992018] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.796 2024/07/22 13:03:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:07.796 [2024-07-22 13:03:27.005403] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.796 [2024-07-22 13:03:27.005450] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.796 2024/07/22 13:03:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:07.796 [2024-07-22 13:03:27.021460] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.796 [2024-07-22 13:03:27.021507] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.796 2024/07/22 13:03:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:07.796 [2024-07-22 13:03:27.038106] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.796 [2024-07-22 13:03:27.038178] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.796 2024/07/22 13:03:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:07.796 [2024-07-22 13:03:27.055487] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.796 [2024-07-22 13:03:27.055534] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.796 2024/07/22 13:03:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:07.796 [2024-07-22 13:03:27.074069] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.796 [2024-07-22 13:03:27.074132] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.796 2024/07/22 13:03:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:07.796 [2024-07-22 13:03:27.088015] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.796 [2024-07-22 13:03:27.088062] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.796 2024/07/22 13:03:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:07.796 [2024-07-22 13:03:27.105085] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.796 [2024-07-22 13:03:27.105134] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.796 2024/07/22 13:03:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:07.796 [2024-07-22 13:03:27.119717] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.796 [2024-07-22 13:03:27.119764] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.796 2024/07/22 13:03:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:07.796 [2024-07-22 13:03:27.137014] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.796 [2024-07-22 13:03:27.137064] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.796 2024/07/22 13:03:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:07.796 [2024-07-22 13:03:27.151608] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.796 [2024-07-22 13:03:27.151655] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.796 2024/07/22 13:03:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: 
error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:07.796 [2024-07-22 13:03:27.166686] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.796 [2024-07-22 13:03:27.166737] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.796 2024/07/22 13:03:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:07.796 [2024-07-22 13:03:27.182670] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.796 [2024-07-22 13:03:27.182706] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.796 2024/07/22 13:03:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:07.796 [2024-07-22 13:03:27.198178] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.796 [2024-07-22 13:03:27.198223] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.796 2024/07/22 13:03:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:07.796 [2024-07-22 13:03:27.213945] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.796 [2024-07-22 13:03:27.214009] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.055 2024/07/22 13:03:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:08.055 [2024-07-22 13:03:27.230646] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.055 [2024-07-22 13:03:27.230682] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.055 2024/07/22 13:03:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:08.055 [2024-07-22 13:03:27.248543] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.055 [2024-07-22 13:03:27.248577] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.055 2024/07/22 13:03:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:08.055 [2024-07-22 13:03:27.264065] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.055 [2024-07-22 13:03:27.264111] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.055 2024/07/22 13:03:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:08.055 [2024-07-22 13:03:27.280164] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.055 [2024-07-22 13:03:27.280222] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.055 2024/07/22 13:03:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:08.055 [2024-07-22 13:03:27.291479] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.056 [2024-07-22 13:03:27.291525] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.056 2024/07/22 13:03:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:08.056 [2024-07-22 13:03:27.307901] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.056 [2024-07-22 13:03:27.307964] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.056 2024/07/22 13:03:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:08.056 [2024-07-22 13:03:27.323418] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.056 [2024-07-22 13:03:27.323466] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.056 2024/07/22 13:03:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:08.056 [2024-07-22 13:03:27.336009] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.056 [2024-07-22 13:03:27.336056] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.056 2024/07/22 13:03:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:08.056 [2024-07-22 13:03:27.347036] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.056 [2024-07-22 13:03:27.347082] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.056 2024/07/22 13:03:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:08.056 [2024-07-22 13:03:27.364676] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.056 [2024-07-22 13:03:27.364723] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.056 2024/07/22 13:03:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:08.056 [2024-07-22 13:03:27.378989] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.056 [2024-07-22 13:03:27.379039] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.056 2024/07/22 13:03:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:08.056 [2024-07-22 13:03:27.394899] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.056 [2024-07-22 13:03:27.394945] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.056 2024/07/22 13:03:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:08.056 [2024-07-22 13:03:27.411197] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.056 [2024-07-22 13:03:27.411242] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.056 2024/07/22 13:03:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:08.056 [2024-07-22 13:03:27.427828] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.056 [2024-07-22 13:03:27.427875] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.056 2024/07/22 13:03:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:08.056 [2024-07-22 13:03:27.444746] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.056 [2024-07-22 13:03:27.444793] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.056 2024/07/22 13:03:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:08.056 [2024-07-22 13:03:27.461605] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.056 [2024-07-22 13:03:27.461653] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.056 2024/07/22 13:03:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:08.315 [2024-07-22 13:03:27.477955] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.315 [2024-07-22 13:03:27.478001] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.315 2024/07/22 13:03:27 error on JSON-RPC call, method: 
nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:08.315 [2024-07-22 13:03:27.495571] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.315 [2024-07-22 13:03:27.495618] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.315 2024/07/22 13:03:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:08.315 [2024-07-22 13:03:27.512289] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.315 [2024-07-22 13:03:27.512335] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.315 2024/07/22 13:03:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:08.315 [2024-07-22 13:03:27.529744] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.315 [2024-07-22 13:03:27.529790] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.315 2024/07/22 13:03:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:08.315 [2024-07-22 13:03:27.544637] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.315 [2024-07-22 13:03:27.544684] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.315 2024/07/22 13:03:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:08.315 [2024-07-22 13:03:27.561349] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.315 [2024-07-22 13:03:27.561397] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.315 2024/07/22 13:03:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:08.315 [2024-07-22 13:03:27.577318] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.315 [2024-07-22 13:03:27.577363] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.315 2024/07/22 13:03:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:08.315 [2024-07-22 13:03:27.588570] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.315 [2024-07-22 13:03:27.588616] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.316 2024/07/22 13:03:27 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:08.316 [2024-07-22 13:03:27.604628] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.316 [2024-07-22 13:03:27.604675] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.316 2024/07/22 13:03:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:08.316 [2024-07-22 13:03:27.621097] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.316 [2024-07-22 13:03:27.621152] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.316 2024/07/22 13:03:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:08.316 [2024-07-22 13:03:27.637956] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.316 [2024-07-22 13:03:27.638003] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.316 2024/07/22 13:03:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:08.316 [2024-07-22 13:03:27.654495] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.316 [2024-07-22 13:03:27.654531] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.316 2024/07/22 13:03:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:08.316 [2024-07-22 13:03:27.665832] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.316 [2024-07-22 13:03:27.665878] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.316 2024/07/22 13:03:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:08.316 [2024-07-22 13:03:27.682130] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.316 [2024-07-22 13:03:27.682186] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.316 2024/07/22 13:03:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:08.316 [2024-07-22 13:03:27.699389] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.316 [2024-07-22 13:03:27.699435] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.316 2024/07/22 
13:03:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:08.316 [2024-07-22 13:03:27.715603] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.316 [2024-07-22 13:03:27.715649] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.316 2024/07/22 13:03:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:08.316 [2024-07-22 13:03:27.732568] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.316 [2024-07-22 13:03:27.732617] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.316 2024/07/22 13:03:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:08.588 [2024-07-22 13:03:27.747080] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.588 [2024-07-22 13:03:27.747126] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.588 2024/07/22 13:03:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:08.588 [2024-07-22 13:03:27.763773] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.588 [2024-07-22 13:03:27.763822] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.588 2024/07/22 13:03:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:08.588 [2024-07-22 13:03:27.778955] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.588 [2024-07-22 13:03:27.779002] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.588 2024/07/22 13:03:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:08.588 [2024-07-22 13:03:27.794710] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.588 [2024-07-22 13:03:27.794773] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.588 2024/07/22 13:03:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:08.588 [2024-07-22 13:03:27.811149] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.588 [2024-07-22 13:03:27.811207] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:41:08.588 2024/07/22 13:03:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:08.588 [2024-07-22 13:03:27.828929] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.588 [2024-07-22 13:03:27.828976] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.588 2024/07/22 13:03:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:08.588 [2024-07-22 13:03:27.843929] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.588 [2024-07-22 13:03:27.843975] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.588 2024/07/22 13:03:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:08.588 [2024-07-22 13:03:27.859475] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.588 [2024-07-22 13:03:27.859521] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.588 2024/07/22 13:03:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:08.588 [2024-07-22 13:03:27.877209] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.588 [2024-07-22 13:03:27.877260] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.588 2024/07/22 13:03:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:08.588 [2024-07-22 13:03:27.891979] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.588 [2024-07-22 13:03:27.892010] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.588 2024/07/22 13:03:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:08.588 [2024-07-22 13:03:27.906841] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.588 [2024-07-22 13:03:27.906872] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.588 2024/07/22 13:03:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:08.588 [2024-07-22 13:03:27.922650] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.588 [2024-07-22 13:03:27.922681] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable 
to add namespace 00:41:08.588 2024/07/22 13:03:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:08.588 [2024-07-22 13:03:27.938436] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.588 [2024-07-22 13:03:27.938490] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.588 2024/07/22 13:03:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:08.588 [2024-07-22 13:03:27.947536] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.588 [2024-07-22 13:03:27.947569] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.588 2024/07/22 13:03:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:08.588 [2024-07-22 13:03:27.963367] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.588 [2024-07-22 13:03:27.963398] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.588 2024/07/22 13:03:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:08.588 [2024-07-22 13:03:27.973282] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.588 [2024-07-22 13:03:27.973314] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.588 2024/07/22 13:03:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:08.588 [2024-07-22 13:03:27.988077] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.588 [2024-07-22 13:03:27.988109] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.588 2024/07/22 13:03:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:08.588 [2024-07-22 13:03:27.997968] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.588 [2024-07-22 13:03:27.998000] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.860 2024/07/22 13:03:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:08.860 [2024-07-22 13:03:28.012940] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.860 [2024-07-22 13:03:28.012984] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:41:08.860 2024/07/22 13:03:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:08.860 [2024-07-22 13:03:28.024479] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.860 [2024-07-22 13:03:28.024525] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.860 2024/07/22 13:03:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:08.860 [2024-07-22 13:03:28.040835] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.860 [2024-07-22 13:03:28.040884] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.860 2024/07/22 13:03:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:08.860 [2024-07-22 13:03:28.056002] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.860 [2024-07-22 13:03:28.056049] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.860 2024/07/22 13:03:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:08.860 [2024-07-22 13:03:28.072299] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.861 [2024-07-22 13:03:28.072344] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.861 2024/07/22 13:03:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:08.861 [2024-07-22 13:03:28.087283] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.861 [2024-07-22 13:03:28.087329] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.861 2024/07/22 13:03:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:08.861 [2024-07-22 13:03:28.103292] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.861 [2024-07-22 13:03:28.103338] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.861 2024/07/22 13:03:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:08.861 [2024-07-22 13:03:28.120040] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.861 [2024-07-22 13:03:28.120087] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.861 2024/07/22 13:03:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:08.861 [2024-07-22 13:03:28.136158] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.861 [2024-07-22 13:03:28.136203] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.861 2024/07/22 13:03:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:08.861 [2024-07-22 13:03:28.153709] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.861 [2024-07-22 13:03:28.153756] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.861 2024/07/22 13:03:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:08.861 [2024-07-22 13:03:28.170864] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.861 [2024-07-22 13:03:28.170912] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.861 2024/07/22 13:03:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:08.861 [2024-07-22 13:03:28.188479] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.861 [2024-07-22 13:03:28.188531] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.861 2024/07/22 13:03:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:08.861 [2024-07-22 13:03:28.202715] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.861 [2024-07-22 13:03:28.202779] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.861 2024/07/22 13:03:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:08.861 [2024-07-22 13:03:28.219586] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.861 [2024-07-22 13:03:28.219635] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.861 2024/07/22 13:03:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:08.861 [2024-07-22 13:03:28.234817] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.861 [2024-07-22 
13:03:28.234855] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.861 2024/07/22 13:03:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:08.861 [2024-07-22 13:03:28.251822] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.861 [2024-07-22 13:03:28.251868] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.861 2024/07/22 13:03:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:08.861 [2024-07-22 13:03:28.267509] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.861 [2024-07-22 13:03:28.267558] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.861 2024/07/22 13:03:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:09.120 [2024-07-22 13:03:28.285807] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:09.120 [2024-07-22 13:03:28.285858] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:09.120 2024/07/22 13:03:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:09.120 [2024-07-22 13:03:28.300010] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:09.120 [2024-07-22 13:03:28.300057] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:09.120 2024/07/22 13:03:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:09.120 [2024-07-22 13:03:28.315353] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:09.120 [2024-07-22 13:03:28.315398] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:09.120 2024/07/22 13:03:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:09.120 [2024-07-22 13:03:28.326453] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:09.120 [2024-07-22 13:03:28.326528] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:09.120 2024/07/22 13:03:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:09.120 [2024-07-22 13:03:28.342954] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:41:09.120 [2024-07-22 13:03:28.343000] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:09.120 2024/07/22 13:03:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:09.120 [2024-07-22 13:03:28.359461] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:09.120 [2024-07-22 13:03:28.359509] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:09.120 2024/07/22 13:03:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:09.120 [2024-07-22 13:03:28.374860] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:09.120 [2024-07-22 13:03:28.374908] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:09.120 2024/07/22 13:03:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:09.120 [2024-07-22 13:03:28.389593] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:09.120 [2024-07-22 13:03:28.389640] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:09.121 2024/07/22 13:03:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:09.121 [2024-07-22 13:03:28.405157] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:09.121 [2024-07-22 13:03:28.405203] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:09.121 2024/07/22 13:03:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:09.121 [2024-07-22 13:03:28.422643] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:09.121 [2024-07-22 13:03:28.422691] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:09.121 2024/07/22 13:03:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:09.121 [2024-07-22 13:03:28.438489] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:09.121 [2024-07-22 13:03:28.438541] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:09.121 2024/07/22 13:03:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:09.121 [2024-07-22 13:03:28.455346] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:41:09.121 [2024-07-22 13:03:28.455395] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:09.121 2024/07/22 13:03:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:09.121 [2024-07-22 13:03:28.470240] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:09.121 [2024-07-22 13:03:28.470286] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:09.121 2024/07/22 13:03:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:09.121 [2024-07-22 13:03:28.485584] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:09.121 [2024-07-22 13:03:28.485631] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:09.121 2024/07/22 13:03:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:09.121 [2024-07-22 13:03:28.502402] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:09.121 [2024-07-22 13:03:28.502449] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:09.121 2024/07/22 13:03:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:09.121 [2024-07-22 13:03:28.519909] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:09.121 [2024-07-22 13:03:28.519957] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:09.121 2024/07/22 13:03:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:09.121 [2024-07-22 13:03:28.536926] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:09.121 [2024-07-22 13:03:28.536973] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:09.121 2024/07/22 13:03:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:09.380 [2024-07-22 13:03:28.552106] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:09.380 [2024-07-22 13:03:28.552161] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:09.380 2024/07/22 13:03:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:09.380 [2024-07-22 13:03:28.570260] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:41:09.380 [2024-07-22 13:03:28.570306] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:09.380 2024/07/22 13:03:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:09.380 [2024-07-22 13:03:28.584543] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:09.380 [2024-07-22 13:03:28.584606] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:09.380 2024/07/22 13:03:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:09.380 [2024-07-22 13:03:28.594447] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:09.380 [2024-07-22 13:03:28.594536] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:09.380 2024/07/22 13:03:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:09.380 [2024-07-22 13:03:28.608200] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:09.380 [2024-07-22 13:03:28.608258] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:09.380 2024/07/22 13:03:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:09.380 [2024-07-22 13:03:28.624786] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:09.380 [2024-07-22 13:03:28.624833] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:09.380 2024/07/22 13:03:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:09.380 [2024-07-22 13:03:28.641451] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:09.380 [2024-07-22 13:03:28.641498] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:09.380 2024/07/22 13:03:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:09.380 [2024-07-22 13:03:28.659270] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:09.380 [2024-07-22 13:03:28.659316] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:09.380 2024/07/22 13:03:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:09.380 [2024-07-22 13:03:28.673821] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:09.380 [2024-07-22 13:03:28.673867] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:09.380 2024/07/22 13:03:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:09.380 [2024-07-22 13:03:28.690577] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:09.380 [2024-07-22 13:03:28.690626] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:09.380 2024/07/22 13:03:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:09.380 [2024-07-22 13:03:28.706769] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:09.380 [2024-07-22 13:03:28.706817] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:09.380 2024/07/22 13:03:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:09.380 [2024-07-22 13:03:28.724538] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:09.380 [2024-07-22 13:03:28.724584] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:09.380 2024/07/22 13:03:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:09.380 [2024-07-22 13:03:28.739788] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:09.380 [2024-07-22 13:03:28.739834] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:09.380 2024/07/22 13:03:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:09.380 [2024-07-22 13:03:28.751936] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:09.380 [2024-07-22 13:03:28.751982] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:09.380 2024/07/22 13:03:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:09.380 [2024-07-22 13:03:28.768741] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:09.381 [2024-07-22 13:03:28.768773] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:09.381 2024/07/22 13:03:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:09.381 [2024-07-22 
13:03:28.783209] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:09.381 [2024-07-22 13:03:28.783270] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:09.381 2024/07/22 13:03:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:09.381 [2024-07-22 13:03:28.800760] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:09.381 [2024-07-22 13:03:28.800832] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:09.640 2024/07/22 13:03:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:09.640 [2024-07-22 13:03:28.815084] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:09.640 [2024-07-22 13:03:28.815178] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:09.640 2024/07/22 13:03:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:09.640 [2024-07-22 13:03:28.830333] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:09.640 [2024-07-22 13:03:28.830397] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:09.640 2024/07/22 13:03:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:09.640 [2024-07-22 13:03:28.839645] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:09.640 [2024-07-22 13:03:28.839679] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:09.640 2024/07/22 13:03:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:09.640 [2024-07-22 13:03:28.854857] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:09.640 [2024-07-22 13:03:28.854908] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:09.640 2024/07/22 13:03:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:09.640 [2024-07-22 13:03:28.870733] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:09.640 [2024-07-22 13:03:28.870769] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:09.640 2024/07/22 13:03:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 
00:41:09.640 [2024-07-22 13:03:28.887448] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:09.640 [2024-07-22 13:03:28.887496] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:09.640 2024/07/22 13:03:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:09.640 [2024-07-22 13:03:28.902321] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:09.640 [2024-07-22 13:03:28.902368] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:09.640 2024/07/22 13:03:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:09.640 [2024-07-22 13:03:28.919584] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:09.640 [2024-07-22 13:03:28.919631] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:09.640 2024/07/22 13:03:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:09.640 [2024-07-22 13:03:28.934153] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:09.640 [2024-07-22 13:03:28.934200] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:09.640 2024/07/22 13:03:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:09.640 [2024-07-22 13:03:28.951282] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:09.640 [2024-07-22 13:03:28.951328] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:09.640 2024/07/22 13:03:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:09.640 [2024-07-22 13:03:28.966853] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:09.640 [2024-07-22 13:03:28.966902] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:09.640 2024/07/22 13:03:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:09.640 [2024-07-22 13:03:28.979029] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:09.640 [2024-07-22 13:03:28.979077] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:09.640 2024/07/22 13:03:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 
Msg=Invalid parameters 00:41:09.640 [2024-07-22 13:03:28.994379] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:09.640 [2024-07-22 13:03:28.994426] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:09.640 2024/07/22 13:03:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:09.640 [2024-07-22 13:03:29.005634] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:09.640 [2024-07-22 13:03:29.005682] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:09.640 2024/07/22 13:03:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:09.640 [2024-07-22 13:03:29.022715] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:09.640 [2024-07-22 13:03:29.022754] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:09.640 2024/07/22 13:03:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:09.640 [2024-07-22 13:03:29.037804] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:09.640 [2024-07-22 13:03:29.037835] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:09.640 2024/07/22 13:03:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:09.640 [2024-07-22 13:03:29.054073] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:09.640 [2024-07-22 13:03:29.054104] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:09.640 2024/07/22 13:03:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:09.900 [2024-07-22 13:03:29.070629] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:09.900 [2024-07-22 13:03:29.070663] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:09.900 2024/07/22 13:03:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:09.900 [2024-07-22 13:03:29.087362] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:09.900 [2024-07-22 13:03:29.087392] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:09.900 2024/07/22 13:03:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, 
err: Code=-32602 Msg=Invalid parameters 00:41:09.900 [2024-07-22 13:03:29.103893] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:09.900 [2024-07-22 13:03:29.103925] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:09.900 2024/07/22 13:03:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:09.900 [2024-07-22 13:03:29.120838] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:09.900 [2024-07-22 13:03:29.120870] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:09.900 2024/07/22 13:03:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:09.900 [2024-07-22 13:03:29.136444] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:09.900 [2024-07-22 13:03:29.136484] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:09.900 2024/07/22 13:03:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:09.900 [2024-07-22 13:03:29.146016] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:09.900 [2024-07-22 13:03:29.146046] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:09.900 2024/07/22 13:03:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:09.900 [2024-07-22 13:03:29.161921] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:09.900 [2024-07-22 13:03:29.161953] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:09.900 2024/07/22 13:03:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:09.900 [2024-07-22 13:03:29.171694] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:09.900 [2024-07-22 13:03:29.171724] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:09.900 2024/07/22 13:03:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:09.900 [2024-07-22 13:03:29.186135] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:09.900 [2024-07-22 13:03:29.186209] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:09.900 2024/07/22 13:03:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:09.900 [2024-07-22 13:03:29.201469] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:09.900 [2024-07-22 13:03:29.201516] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:09.900 2024/07/22 13:03:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:09.900 [2024-07-22 13:03:29.213245] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:09.900 [2024-07-22 13:03:29.213291] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:09.900 2024/07/22 13:03:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:09.900 [2024-07-22 13:03:29.229855] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:09.900 [2024-07-22 13:03:29.229905] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:09.900 2024/07/22 13:03:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:09.900 [2024-07-22 13:03:29.244837] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:09.900 [2024-07-22 13:03:29.244886] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:09.900 2024/07/22 13:03:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:09.900 [2024-07-22 13:03:29.262289] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:09.900 [2024-07-22 13:03:29.262356] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:09.900 2024/07/22 13:03:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:09.900 [2024-07-22 13:03:29.277233] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:09.900 [2024-07-22 13:03:29.277280] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:09.900 2024/07/22 13:03:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:09.900 [2024-07-22 13:03:29.296232] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:09.900 [2024-07-22 13:03:29.296292] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:09.901 2024/07/22 13:03:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: 
error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:09.901 [2024-07-22 13:03:29.310670] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:09.901 [2024-07-22 13:03:29.310705] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:09.901 2024/07/22 13:03:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:10.160 [2024-07-22 13:03:29.326590] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:10.160 [2024-07-22 13:03:29.326628] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:10.160 2024/07/22 13:03:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:10.160 [2024-07-22 13:03:29.345320] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:10.160 [2024-07-22 13:03:29.345367] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:10.160 2024/07/22 13:03:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:10.160 [2024-07-22 13:03:29.359317] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:10.160 [2024-07-22 13:03:29.359363] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:10.160 2024/07/22 13:03:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:10.160 [2024-07-22 13:03:29.376571] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:10.160 [2024-07-22 13:03:29.376619] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:10.160 2024/07/22 13:03:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:10.160 [2024-07-22 13:03:29.391144] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:10.160 [2024-07-22 13:03:29.391217] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:10.160 2024/07/22 13:03:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:10.160 [2024-07-22 13:03:29.408508] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:10.160 [2024-07-22 13:03:29.408588] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:10.160 2024/07/22 13:03:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:10.160 [2024-07-22 13:03:29.424471] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:10.160 [2024-07-22 13:03:29.424504] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:10.160 2024/07/22 13:03:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:10.160 [2024-07-22 13:03:29.441070] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:10.160 [2024-07-22 13:03:29.441119] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:10.160 2024/07/22 13:03:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:10.160 [2024-07-22 13:03:29.457333] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:10.160 [2024-07-22 13:03:29.457379] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:10.160 2024/07/22 13:03:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:10.160 [2024-07-22 13:03:29.475459] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:10.160 [2024-07-22 13:03:29.475507] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:10.161 2024/07/22 13:03:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:10.161 [2024-07-22 13:03:29.489435] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:10.161 [2024-07-22 13:03:29.489484] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:10.161 2024/07/22 13:03:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:10.161 [2024-07-22 13:03:29.503300] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:10.161 [2024-07-22 13:03:29.503346] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:10.161 2024/07/22 13:03:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:10.161 [2024-07-22 13:03:29.519435] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:10.161 [2024-07-22 13:03:29.519484] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:10.161 2024/07/22 13:03:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:10.161 [2024-07-22 13:03:29.535885] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:10.161 [2024-07-22 13:03:29.535934] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:10.161 2024/07/22 13:03:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:10.161 00:41:10.161 Latency(us) 00:41:10.161 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:10.161 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:41:10.161 Nvme1n1 : 5.01 12584.54 98.32 0.00 0.00 10159.35 4170.47 20614.05 00:41:10.161 =================================================================================================================== 00:41:10.161 Total : 12584.54 98.32 0.00 0.00 10159.35 4170.47 20614.05 00:41:10.161 [2024-07-22 13:03:29.547970] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:10.161 [2024-07-22 13:03:29.548015] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:10.161 2024/07/22 13:03:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:10.161 [2024-07-22 13:03:29.559947] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:10.161 [2024-07-22 13:03:29.559991] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:10.161 2024/07/22 13:03:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:10.161 [2024-07-22 13:03:29.571969] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:10.161 [2024-07-22 13:03:29.572020] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:10.161 2024/07/22 13:03:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:10.420 [2024-07-22 13:03:29.583991] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:10.420 [2024-07-22 13:03:29.584059] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:10.420 2024/07/22 13:03:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:10.420 [2024-07-22 13:03:29.595976] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:10.420 [2024-07-22 13:03:29.596026] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:10.420 2024/07/22 13:03:29 error on JSON-RPC call, method: 
nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:10.420 [2024-07-22 13:03:29.607987] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:10.420 [2024-07-22 13:03:29.608038] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:10.420 2024/07/22 13:03:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:10.420 [2024-07-22 13:03:29.619991] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:10.420 [2024-07-22 13:03:29.620040] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:10.420 2024/07/22 13:03:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:10.420 [2024-07-22 13:03:29.631989] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:10.420 [2024-07-22 13:03:29.632039] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:10.420 2024/07/22 13:03:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:10.420 [2024-07-22 13:03:29.643999] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:10.420 [2024-07-22 13:03:29.644070] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:10.420 2024/07/22 13:03:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:10.420 [2024-07-22 13:03:29.655995] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:10.420 [2024-07-22 13:03:29.656045] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:10.420 2024/07/22 13:03:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:10.420 [2024-07-22 13:03:29.667999] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:10.420 [2024-07-22 13:03:29.668049] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:10.420 2024/07/22 13:03:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:10.420 [2024-07-22 13:03:29.679991] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:10.420 [2024-07-22 13:03:29.680038] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:10.420 2024/07/22 13:03:29 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:10.420 [2024-07-22 13:03:29.691986] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:10.420 [2024-07-22 13:03:29.692031] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:10.421 2024/07/22 13:03:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:10.421 [2024-07-22 13:03:29.704009] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:10.421 [2024-07-22 13:03:29.704068] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:10.421 2024/07/22 13:03:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:10.421 [2024-07-22 13:03:29.715998] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:10.421 [2024-07-22 13:03:29.716047] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:10.421 2024/07/22 13:03:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:10.421 [2024-07-22 13:03:29.728027] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:10.421 [2024-07-22 13:03:29.728079] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:10.421 2024/07/22 13:03:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:10.421 [2024-07-22 13:03:29.740023] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:10.421 [2024-07-22 13:03:29.740069] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:10.421 2024/07/22 13:03:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:10.421 [2024-07-22 13:03:29.752005] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:10.421 [2024-07-22 13:03:29.752049] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:10.421 2024/07/22 13:03:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:10.421 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (85694) - No such process 00:41:10.421 13:03:29 -- target/zcopy.sh@49 -- # wait 85694 00:41:10.421 13:03:29 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:41:10.421 13:03:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:41:10.421 13:03:29 -- common/autotest_common.sh@10 -- # set +x 00:41:10.421 13:03:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:41:10.421 13:03:29 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:41:10.421 13:03:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:41:10.421 13:03:29 -- common/autotest_common.sh@10 -- # set +x 00:41:10.421 delay0 00:41:10.421 13:03:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:41:10.421 13:03:29 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:41:10.421 13:03:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:41:10.421 13:03:29 -- common/autotest_common.sh@10 -- # set +x 00:41:10.421 13:03:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:41:10.421 13:03:29 -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:41:10.680 [2024-07-22 13:03:29.950304] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:41:17.243 Initializing NVMe Controllers 00:41:17.243 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:41:17.243 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:41:17.243 Initialization complete. Launching workers. 00:41:17.243 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 74 00:41:17.243 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 361, failed to submit 33 00:41:17.243 success 155, unsuccess 206, failed 0 00:41:17.243 13:03:36 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:41:17.243 13:03:36 -- target/zcopy.sh@60 -- # nvmftestfini 00:41:17.243 13:03:36 -- nvmf/common.sh@476 -- # nvmfcleanup 00:41:17.243 13:03:36 -- nvmf/common.sh@116 -- # sync 00:41:17.243 13:03:36 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:41:17.243 13:03:36 -- nvmf/common.sh@119 -- # set +e 00:41:17.244 13:03:36 -- nvmf/common.sh@120 -- # for i in {1..20} 00:41:17.244 13:03:36 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:41:17.244 rmmod nvme_tcp 00:41:17.244 rmmod nvme_fabrics 00:41:17.244 rmmod nvme_keyring 00:41:17.244 13:03:36 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:41:17.244 13:03:36 -- nvmf/common.sh@123 -- # set -e 00:41:17.244 13:03:36 -- nvmf/common.sh@124 -- # return 0 00:41:17.244 13:03:36 -- nvmf/common.sh@477 -- # '[' -n 85532 ']' 00:41:17.244 13:03:36 -- nvmf/common.sh@478 -- # killprocess 85532 00:41:17.244 13:03:36 -- common/autotest_common.sh@926 -- # '[' -z 85532 ']' 00:41:17.244 13:03:36 -- common/autotest_common.sh@930 -- # kill -0 85532 00:41:17.244 13:03:36 -- common/autotest_common.sh@931 -- # uname 00:41:17.244 13:03:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:41:17.244 13:03:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 85532 00:41:17.244 13:03:36 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:41:17.244 killing process with pid 85532 00:41:17.244 13:03:36 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:41:17.244 13:03:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 85532' 00:41:17.244 
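Each "Requested NSID 1 already in use" / Code=-32602 entry above is the expected response to re-issuing nvmf_subsystem_add_ns for a namespace ID that is already attached to nqn.2016-06.io.spdk:cnode1. A minimal sketch of the call being exercised, reconstructed from the params map printed by the Go RPC client and the rpc.py path used elsewhere in this run:

    # Roughly the JSON-RPC request behind each failing call above:
    #   {"method": "nvmf_subsystem_add_ns",
    #    "params": {"nqn": "nqn.2016-06.io.spdk:cnode1",
    #               "namespace": {"bdev_name": "malloc0", "nsid": 1}}}
    # Re-issuing it while NSID 1 is still attached returns -32602 (Invalid parameters),
    # which is what the loop above deliberately triggers.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns \
        nqn.2016-06.io.spdk:cnode1 malloc0 -n 1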
13:03:36 -- common/autotest_common.sh@945 -- # kill 85532 00:41:17.244 13:03:36 -- common/autotest_common.sh@950 -- # wait 85532 00:41:17.244 13:03:36 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:41:17.244 13:03:36 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:41:17.244 13:03:36 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:41:17.244 13:03:36 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:41:17.244 13:03:36 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:41:17.244 13:03:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:17.244 13:03:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:41:17.244 13:03:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:17.244 13:03:36 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:41:17.244 00:41:17.244 real 0m24.613s 00:41:17.244 user 0m39.967s 00:41:17.244 sys 0m6.540s 00:41:17.244 13:03:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:41:17.244 13:03:36 -- common/autotest_common.sh@10 -- # set +x 00:41:17.244 ************************************ 00:41:17.244 END TEST nvmf_zcopy 00:41:17.244 ************************************ 00:41:17.244 13:03:36 -- nvmf/nvmf.sh@53 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:41:17.244 13:03:36 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:41:17.244 13:03:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:41:17.244 13:03:36 -- common/autotest_common.sh@10 -- # set +x 00:41:17.244 ************************************ 00:41:17.244 START TEST nvmf_nmic 00:41:17.244 ************************************ 00:41:17.244 13:03:36 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:41:17.244 * Looking for test storage... 
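The cleanup trace above (nvmftestfini) repeats after every target test in this run. A condensed sketch of what it does, assuming _remove_spdk_ns simply deletes the nvmf_tgt_ns_spdk namespace created during setup:

    # unload the initiator-side NVMe-oF modules loaded for the test
    modprobe -r nvme-tcp
    modprobe -r nvme-fabrics
    # stop the nvmf_tgt instance started for the test and wait for it to exit
    kill "$nvmfpid"; wait "$nvmfpid"
    # drop the target-side namespace and flush the initiator veth address
    ip netns delete nvmf_tgt_ns_spdk   # assumed behavior of _remove_spdk_ns
    ip -4 addr flush nvmf_init_if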
00:41:17.244 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:41:17.244 13:03:36 -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:41:17.244 13:03:36 -- nvmf/common.sh@7 -- # uname -s 00:41:17.244 13:03:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:17.244 13:03:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:17.244 13:03:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:17.244 13:03:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:17.244 13:03:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:17.244 13:03:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:17.244 13:03:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:17.244 13:03:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:17.244 13:03:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:17.244 13:03:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:17.244 13:03:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:864ae4a3-cd96-439b-8ef2-6dab4d992115 00:41:17.244 13:03:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=864ae4a3-cd96-439b-8ef2-6dab4d992115 00:41:17.244 13:03:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:17.244 13:03:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:17.244 13:03:36 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:41:17.244 13:03:36 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:41:17.244 13:03:36 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:17.244 13:03:36 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:17.244 13:03:36 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:17.244 13:03:36 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:17.244 13:03:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:17.244 13:03:36 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:17.244 13:03:36 -- paths/export.sh@5 
-- # export PATH 00:41:17.244 13:03:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:17.244 13:03:36 -- nvmf/common.sh@46 -- # : 0 00:41:17.244 13:03:36 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:41:17.244 13:03:36 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:41:17.244 13:03:36 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:41:17.244 13:03:36 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:17.244 13:03:36 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:17.244 13:03:36 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:41:17.244 13:03:36 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:41:17.244 13:03:36 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:41:17.244 13:03:36 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:41:17.244 13:03:36 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:41:17.244 13:03:36 -- target/nmic.sh@14 -- # nvmftestinit 00:41:17.244 13:03:36 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:41:17.244 13:03:36 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:17.244 13:03:36 -- nvmf/common.sh@436 -- # prepare_net_devs 00:41:17.244 13:03:36 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:41:17.244 13:03:36 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:41:17.244 13:03:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:17.244 13:03:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:41:17.244 13:03:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:17.244 13:03:36 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:41:17.244 13:03:36 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:41:17.244 13:03:36 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:41:17.244 13:03:36 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:41:17.244 13:03:36 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:41:17.244 13:03:36 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:41:17.244 13:03:36 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:17.244 13:03:36 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:17.244 13:03:36 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:41:17.244 13:03:36 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:41:17.244 13:03:36 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:41:17.244 13:03:36 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:41:17.244 13:03:36 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:41:17.244 13:03:36 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:17.244 13:03:36 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:41:17.244 13:03:36 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:41:17.244 13:03:36 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:41:17.244 13:03:36 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:41:17.244 13:03:36 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:41:17.244 13:03:36 -- 
nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:41:17.244 Cannot find device "nvmf_tgt_br" 00:41:17.244 13:03:36 -- nvmf/common.sh@154 -- # true 00:41:17.244 13:03:36 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:41:17.244 Cannot find device "nvmf_tgt_br2" 00:41:17.244 13:03:36 -- nvmf/common.sh@155 -- # true 00:41:17.244 13:03:36 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:41:17.244 13:03:36 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:41:17.244 Cannot find device "nvmf_tgt_br" 00:41:17.244 13:03:36 -- nvmf/common.sh@157 -- # true 00:41:17.244 13:03:36 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:41:17.244 Cannot find device "nvmf_tgt_br2" 00:41:17.244 13:03:36 -- nvmf/common.sh@158 -- # true 00:41:17.244 13:03:36 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:41:17.244 13:03:36 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:41:17.503 13:03:36 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:41:17.503 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:41:17.503 13:03:36 -- nvmf/common.sh@161 -- # true 00:41:17.503 13:03:36 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:41:17.503 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:41:17.503 13:03:36 -- nvmf/common.sh@162 -- # true 00:41:17.503 13:03:36 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:41:17.503 13:03:36 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:41:17.503 13:03:36 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:41:17.503 13:03:36 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:41:17.503 13:03:36 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:41:17.503 13:03:36 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:41:17.503 13:03:36 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:41:17.503 13:03:36 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:41:17.503 13:03:36 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:41:17.503 13:03:36 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:41:17.503 13:03:36 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:41:17.503 13:03:36 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:41:17.503 13:03:36 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:41:17.503 13:03:36 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:41:17.503 13:03:36 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:41:17.503 13:03:36 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:41:17.503 13:03:36 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:41:17.503 13:03:36 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:41:17.503 13:03:36 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:41:17.503 13:03:36 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:41:17.503 13:03:36 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:41:17.503 13:03:36 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:41:17.503 13:03:36 -- 
nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:41:17.503 13:03:36 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:41:17.503 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:17.503 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.093 ms 00:41:17.503 00:41:17.503 --- 10.0.0.2 ping statistics --- 00:41:17.503 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:17.503 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:41:17.503 13:03:36 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:41:17.503 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:41:17.503 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:41:17.503 00:41:17.503 --- 10.0.0.3 ping statistics --- 00:41:17.503 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:17.503 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:41:17.503 13:03:36 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:41:17.503 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:41:17.503 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:41:17.503 00:41:17.503 --- 10.0.0.1 ping statistics --- 00:41:17.503 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:17.503 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:41:17.503 13:03:36 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:17.503 13:03:36 -- nvmf/common.sh@421 -- # return 0 00:41:17.503 13:03:36 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:41:17.503 13:03:36 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:17.503 13:03:36 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:41:17.503 13:03:36 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:41:17.503 13:03:36 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:17.503 13:03:36 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:41:17.503 13:03:36 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:41:17.503 13:03:36 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:41:17.503 13:03:36 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:41:17.503 13:03:36 -- common/autotest_common.sh@712 -- # xtrace_disable 00:41:17.503 13:03:36 -- common/autotest_common.sh@10 -- # set +x 00:41:17.503 13:03:36 -- nvmf/common.sh@469 -- # nvmfpid=86011 00:41:17.503 13:03:36 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:41:17.503 13:03:36 -- nvmf/common.sh@470 -- # waitforlisten 86011 00:41:17.503 13:03:36 -- common/autotest_common.sh@819 -- # '[' -z 86011 ']' 00:41:17.503 13:03:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:17.503 13:03:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:41:17.503 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:17.504 13:03:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:17.504 13:03:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:41:17.504 13:03:36 -- common/autotest_common.sh@10 -- # set +x 00:41:17.762 [2024-07-22 13:03:36.955812] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:41:17.762 [2024-07-22 13:03:36.955894] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:17.762 [2024-07-22 13:03:37.097416] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:41:18.020 [2024-07-22 13:03:37.189985] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:41:18.020 [2024-07-22 13:03:37.190131] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:18.020 [2024-07-22 13:03:37.190144] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:18.020 [2024-07-22 13:03:37.190195] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:18.020 [2024-07-22 13:03:37.190761] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:41:18.020 [2024-07-22 13:03:37.190933] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:41:18.020 [2024-07-22 13:03:37.191151] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:41:18.020 [2024-07-22 13:03:37.191155] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:41:18.587 13:03:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:41:18.587 13:03:37 -- common/autotest_common.sh@852 -- # return 0 00:41:18.587 13:03:37 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:41:18.587 13:03:37 -- common/autotest_common.sh@718 -- # xtrace_disable 00:41:18.587 13:03:37 -- common/autotest_common.sh@10 -- # set +x 00:41:18.587 13:03:37 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:18.587 13:03:37 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:41:18.587 13:03:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:41:18.587 13:03:37 -- common/autotest_common.sh@10 -- # set +x 00:41:18.587 [2024-07-22 13:03:37.989272] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:18.846 13:03:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:41:18.846 13:03:38 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:41:18.846 13:03:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:41:18.846 13:03:38 -- common/autotest_common.sh@10 -- # set +x 00:41:18.846 Malloc0 00:41:18.846 13:03:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:41:18.846 13:03:38 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:41:18.846 13:03:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:41:18.846 13:03:38 -- common/autotest_common.sh@10 -- # set +x 00:41:18.846 13:03:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:41:18.846 13:03:38 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:41:18.846 13:03:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:41:18.846 13:03:38 -- common/autotest_common.sh@10 -- # set +x 00:41:18.846 13:03:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:41:18.846 13:03:38 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:18.846 13:03:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:41:18.846 13:03:38 -- common/autotest_common.sh@10 -- # set +x 00:41:18.846 
[2024-07-22 13:03:38.059921] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:18.846 13:03:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:41:18.846 13:03:38 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:41:18.846 test case1: single bdev can't be used in multiple subsystems 00:41:18.846 13:03:38 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:41:18.846 13:03:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:41:18.846 13:03:38 -- common/autotest_common.sh@10 -- # set +x 00:41:18.846 13:03:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:41:18.846 13:03:38 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:41:18.846 13:03:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:41:18.846 13:03:38 -- common/autotest_common.sh@10 -- # set +x 00:41:18.846 13:03:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:41:18.846 13:03:38 -- target/nmic.sh@28 -- # nmic_status=0 00:41:18.846 13:03:38 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:41:18.846 13:03:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:41:18.846 13:03:38 -- common/autotest_common.sh@10 -- # set +x 00:41:18.846 [2024-07-22 13:03:38.083783] bdev.c:7940:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:41:18.846 [2024-07-22 13:03:38.083821] subsystem.c:1819:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:41:18.846 [2024-07-22 13:03:38.083833] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.846 2024/07/22 13:03:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:Malloc0] nqn:nqn.2016-06.io.spdk:cnode2], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:18.846 request: 00:41:18.846 { 00:41:18.846 "method": "nvmf_subsystem_add_ns", 00:41:18.846 "params": { 00:41:18.846 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:41:18.846 "namespace": { 00:41:18.846 "bdev_name": "Malloc0" 00:41:18.846 } 00:41:18.846 } 00:41:18.846 } 00:41:18.846 Got JSON-RPC error response 00:41:18.846 GoRPCClient: error on JSON-RPC call 00:41:18.846 13:03:38 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:41:18.846 13:03:38 -- target/nmic.sh@29 -- # nmic_status=1 00:41:18.846 13:03:38 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:41:18.846 Adding namespace failed - expected result. 00:41:18.846 13:03:38 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 
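Test case1 above checks bdev claim semantics: the first nvmf_subsystem_add_ns takes an exclusive_write claim on Malloc0 for cnode1, so attaching the same bdev to a second subsystem is rejected. A minimal reproduction against an already-running target, using the same rpc.py script and names as this run:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # second subsystem, allow-any-host, serial SPDK2 (as in the trace above)
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
    # expected to fail: Malloc0 is already claimed (exclusive_write) by cnode1
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0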
00:41:18.846 13:03:38 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:41:18.846 test case2: host connect to nvmf target in multiple paths 00:41:18.846 13:03:38 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:41:18.846 13:03:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:41:18.846 13:03:38 -- common/autotest_common.sh@10 -- # set +x 00:41:18.846 [2024-07-22 13:03:38.095900] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:41:18.846 13:03:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:41:18.846 13:03:38 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:864ae4a3-cd96-439b-8ef2-6dab4d992115 --hostid=864ae4a3-cd96-439b-8ef2-6dab4d992115 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:41:19.105 13:03:38 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:864ae4a3-cd96-439b-8ef2-6dab4d992115 --hostid=864ae4a3-cd96-439b-8ef2-6dab4d992115 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:41:19.105 13:03:38 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:41:19.105 13:03:38 -- common/autotest_common.sh@1177 -- # local i=0 00:41:19.105 13:03:38 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:41:19.105 13:03:38 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:41:19.105 13:03:38 -- common/autotest_common.sh@1184 -- # sleep 2 00:41:21.636 13:03:40 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:41:21.636 13:03:40 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:41:21.636 13:03:40 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:41:21.636 13:03:40 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:41:21.636 13:03:40 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:41:21.636 13:03:40 -- common/autotest_common.sh@1187 -- # return 0 00:41:21.636 13:03:40 -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:41:21.636 [global] 00:41:21.636 thread=1 00:41:21.636 invalidate=1 00:41:21.636 rw=write 00:41:21.636 time_based=1 00:41:21.636 runtime=1 00:41:21.636 ioengine=libaio 00:41:21.636 direct=1 00:41:21.636 bs=4096 00:41:21.636 iodepth=1 00:41:21.636 norandommap=0 00:41:21.636 numjobs=1 00:41:21.636 00:41:21.636 verify_dump=1 00:41:21.636 verify_backlog=512 00:41:21.636 verify_state_save=0 00:41:21.636 do_verify=1 00:41:21.636 verify=crc32c-intel 00:41:21.636 [job0] 00:41:21.636 filename=/dev/nvme0n1 00:41:21.636 Could not set queue depth (nvme0n1) 00:41:21.636 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:21.636 fio-3.35 00:41:21.636 Starting 1 thread 00:41:22.640 00:41:22.640 job0: (groupid=0, jobs=1): err= 0: pid=86124: Mon Jul 22 13:03:41 2024 00:41:22.640 read: IOPS=3268, BW=12.8MiB/s (13.4MB/s)(12.8MiB/1001msec) 00:41:22.640 slat (nsec): min=12376, max=67452, avg=16440.13, stdev=4334.87 00:41:22.640 clat (usec): min=116, max=495, avg=144.08, stdev=14.92 00:41:22.640 lat (usec): min=130, max=509, avg=160.52, stdev=15.42 00:41:22.640 clat percentiles (usec): 00:41:22.640 | 1.00th=[ 122], 5.00th=[ 127], 10.00th=[ 130], 20.00th=[ 135], 00:41:22.640 | 30.00th=[ 137], 40.00th=[ 139], 50.00th=[ 141], 60.00th=[ 145], 00:41:22.640 | 70.00th=[ 149], 80.00th=[ 155], 90.00th=[ 163], 
95.00th=[ 172], 00:41:22.640 | 99.00th=[ 186], 99.50th=[ 192], 99.90th=[ 210], 99.95th=[ 212], 00:41:22.640 | 99.99th=[ 494] 00:41:22.640 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:41:22.640 slat (usec): min=18, max=131, avg=24.54, stdev= 6.93 00:41:22.640 clat (usec): min=82, max=263, avg=104.32, stdev=12.72 00:41:22.640 lat (usec): min=101, max=332, avg=128.86, stdev=14.81 00:41:22.640 clat percentiles (usec): 00:41:22.640 | 1.00th=[ 87], 5.00th=[ 91], 10.00th=[ 93], 20.00th=[ 95], 00:41:22.640 | 30.00th=[ 97], 40.00th=[ 99], 50.00th=[ 101], 60.00th=[ 104], 00:41:22.640 | 70.00th=[ 108], 80.00th=[ 113], 90.00th=[ 121], 95.00th=[ 128], 00:41:22.640 | 99.00th=[ 145], 99.50th=[ 151], 99.90th=[ 200], 99.95th=[ 262], 00:41:22.640 | 99.99th=[ 265] 00:41:22.640 bw ( KiB/s): min=15200, max=15200, per=100.00%, avg=15200.00, stdev= 0.00, samples=1 00:41:22.640 iops : min= 3800, max= 3800, avg=3800.00, stdev= 0.00, samples=1 00:41:22.640 lat (usec) : 100=23.61%, 250=76.34%, 500=0.04% 00:41:22.640 cpu : usr=2.20%, sys=10.90%, ctx=6857, majf=0, minf=2 00:41:22.640 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:22.640 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:22.640 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:22.640 issued rwts: total=3272,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:22.640 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:22.640 00:41:22.640 Run status group 0 (all jobs): 00:41:22.640 READ: bw=12.8MiB/s (13.4MB/s), 12.8MiB/s-12.8MiB/s (13.4MB/s-13.4MB/s), io=12.8MiB (13.4MB), run=1001-1001msec 00:41:22.640 WRITE: bw=14.0MiB/s (14.7MB/s), 14.0MiB/s-14.0MiB/s (14.7MB/s-14.7MB/s), io=14.0MiB (14.7MB), run=1001-1001msec 00:41:22.640 00:41:22.640 Disk stats (read/write): 00:41:22.640 nvme0n1: ios=3105/3072, merge=0/0, ticks=493/379, in_queue=872, util=91.18% 00:41:22.640 13:03:41 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:41:22.640 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:41:22.640 13:03:41 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:41:22.640 13:03:41 -- common/autotest_common.sh@1198 -- # local i=0 00:41:22.640 13:03:41 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:41:22.641 13:03:41 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:41:22.641 13:03:41 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:41:22.641 13:03:41 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:41:22.641 13:03:41 -- common/autotest_common.sh@1210 -- # return 0 00:41:22.641 13:03:41 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:41:22.641 13:03:41 -- target/nmic.sh@53 -- # nvmftestfini 00:41:22.641 13:03:41 -- nvmf/common.sh@476 -- # nvmfcleanup 00:41:22.641 13:03:41 -- nvmf/common.sh@116 -- # sync 00:41:22.641 13:03:41 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:41:22.641 13:03:41 -- nvmf/common.sh@119 -- # set +e 00:41:22.641 13:03:41 -- nvmf/common.sh@120 -- # for i in {1..20} 00:41:22.641 13:03:41 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:41:22.641 rmmod nvme_tcp 00:41:22.641 rmmod nvme_fabrics 00:41:22.641 rmmod nvme_keyring 00:41:22.641 13:03:41 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:41:22.641 13:03:41 -- nvmf/common.sh@123 -- # set -e 00:41:22.641 13:03:41 -- nvmf/common.sh@124 -- # return 0 00:41:22.641 13:03:41 -- nvmf/common.sh@477 -- # '[' -n 
86011 ']' 00:41:22.641 13:03:41 -- nvmf/common.sh@478 -- # killprocess 86011 00:41:22.641 13:03:41 -- common/autotest_common.sh@926 -- # '[' -z 86011 ']' 00:41:22.641 13:03:41 -- common/autotest_common.sh@930 -- # kill -0 86011 00:41:22.641 13:03:41 -- common/autotest_common.sh@931 -- # uname 00:41:22.641 13:03:41 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:41:22.641 13:03:41 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 86011 00:41:22.641 13:03:41 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:41:22.641 killing process with pid 86011 00:41:22.641 13:03:41 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:41:22.641 13:03:41 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 86011' 00:41:22.641 13:03:41 -- common/autotest_common.sh@945 -- # kill 86011 00:41:22.641 13:03:41 -- common/autotest_common.sh@950 -- # wait 86011 00:41:22.899 13:03:42 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:41:22.899 13:03:42 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:41:22.899 13:03:42 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:41:22.899 13:03:42 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:41:22.899 13:03:42 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:41:22.899 13:03:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:22.899 13:03:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:41:22.899 13:03:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:22.899 13:03:42 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:41:22.899 00:41:22.899 real 0m5.791s 00:41:22.899 user 0m19.665s 00:41:22.899 sys 0m1.360s 00:41:22.899 13:03:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:41:22.899 13:03:42 -- common/autotest_common.sh@10 -- # set +x 00:41:22.899 ************************************ 00:41:22.899 END TEST nvmf_nmic 00:41:22.899 ************************************ 00:41:22.899 13:03:42 -- nvmf/nvmf.sh@54 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:41:22.899 13:03:42 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:41:22.899 13:03:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:41:22.899 13:03:42 -- common/autotest_common.sh@10 -- # set +x 00:41:22.899 ************************************ 00:41:22.899 START TEST nvmf_fio_target 00:41:22.899 ************************************ 00:41:22.899 13:03:42 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:41:23.158 * Looking for test storage... 
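For reference, test case2 of the nmic run that just finished connects the kernel initiator to cnode1 through both listeners (ports 4420 and 4421) and runs a short write/verify fio job before disconnecting. A condensed initiator-side sketch using the host NQN/ID generated for that run:

    HOSTID=864ae4a3-cd96-439b-8ef2-6dab4d992115
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:$HOSTID
    nvme connect --hostnqn=$HOSTNQN --hostid=$HOSTID -t tcp \
        -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    nvme connect --hostnqn=$HOSTNQN --hostid=$HOSTID -t tcp \
        -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421
    # fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v runs against /dev/nvme0n1,
    # then both paths are torn down:
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1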
00:41:23.158 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:41:23.158 13:03:42 -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:41:23.158 13:03:42 -- nvmf/common.sh@7 -- # uname -s 00:41:23.158 13:03:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:23.158 13:03:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:23.158 13:03:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:23.158 13:03:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:23.158 13:03:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:23.158 13:03:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:23.158 13:03:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:23.158 13:03:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:23.158 13:03:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:23.158 13:03:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:23.158 13:03:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:864ae4a3-cd96-439b-8ef2-6dab4d992115 00:41:23.158 13:03:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=864ae4a3-cd96-439b-8ef2-6dab4d992115 00:41:23.158 13:03:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:23.158 13:03:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:23.158 13:03:42 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:41:23.158 13:03:42 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:41:23.158 13:03:42 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:23.158 13:03:42 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:23.158 13:03:42 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:23.158 13:03:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:23.158 13:03:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:23.158 13:03:42 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:23.158 13:03:42 -- paths/export.sh@5 
-- # export PATH 00:41:23.158 13:03:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:23.158 13:03:42 -- nvmf/common.sh@46 -- # : 0 00:41:23.158 13:03:42 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:41:23.158 13:03:42 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:41:23.158 13:03:42 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:41:23.158 13:03:42 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:23.158 13:03:42 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:23.158 13:03:42 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:41:23.158 13:03:42 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:41:23.158 13:03:42 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:41:23.158 13:03:42 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:41:23.158 13:03:42 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:41:23.158 13:03:42 -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:41:23.158 13:03:42 -- target/fio.sh@16 -- # nvmftestinit 00:41:23.158 13:03:42 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:41:23.158 13:03:42 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:23.158 13:03:42 -- nvmf/common.sh@436 -- # prepare_net_devs 00:41:23.158 13:03:42 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:41:23.158 13:03:42 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:41:23.158 13:03:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:23.159 13:03:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:41:23.159 13:03:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:23.159 13:03:42 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:41:23.159 13:03:42 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:41:23.159 13:03:42 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:41:23.159 13:03:42 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:41:23.159 13:03:42 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:41:23.159 13:03:42 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:41:23.159 13:03:42 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:23.159 13:03:42 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:23.159 13:03:42 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:41:23.159 13:03:42 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:41:23.159 13:03:42 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:41:23.159 13:03:42 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:41:23.159 13:03:42 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:41:23.159 13:03:42 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:23.159 13:03:42 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:41:23.159 13:03:42 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:41:23.159 13:03:42 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:41:23.159 13:03:42 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:41:23.159 13:03:42 -- 
nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:41:23.159 13:03:42 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:41:23.159 Cannot find device "nvmf_tgt_br" 00:41:23.159 13:03:42 -- nvmf/common.sh@154 -- # true 00:41:23.159 13:03:42 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:41:23.159 Cannot find device "nvmf_tgt_br2" 00:41:23.159 13:03:42 -- nvmf/common.sh@155 -- # true 00:41:23.159 13:03:42 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:41:23.159 13:03:42 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:41:23.159 Cannot find device "nvmf_tgt_br" 00:41:23.159 13:03:42 -- nvmf/common.sh@157 -- # true 00:41:23.159 13:03:42 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:41:23.159 Cannot find device "nvmf_tgt_br2" 00:41:23.159 13:03:42 -- nvmf/common.sh@158 -- # true 00:41:23.159 13:03:42 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:41:23.159 13:03:42 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:41:23.159 13:03:42 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:41:23.159 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:41:23.159 13:03:42 -- nvmf/common.sh@161 -- # true 00:41:23.159 13:03:42 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:41:23.159 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:41:23.159 13:03:42 -- nvmf/common.sh@162 -- # true 00:41:23.159 13:03:42 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:41:23.159 13:03:42 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:41:23.159 13:03:42 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:41:23.159 13:03:42 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:41:23.159 13:03:42 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:41:23.159 13:03:42 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:41:23.418 13:03:42 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:41:23.418 13:03:42 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:41:23.418 13:03:42 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:41:23.418 13:03:42 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:41:23.418 13:03:42 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:41:23.418 13:03:42 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:41:23.418 13:03:42 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:41:23.418 13:03:42 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:41:23.418 13:03:42 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:41:23.418 13:03:42 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:41:23.418 13:03:42 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:41:23.418 13:03:42 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:41:23.418 13:03:42 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:41:23.418 13:03:42 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:41:23.418 13:03:42 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:41:23.418 13:03:42 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 
-i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:41:23.418 13:03:42 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:41:23.418 13:03:42 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:41:23.418 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:23.418 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.091 ms 00:41:23.418 00:41:23.418 --- 10.0.0.2 ping statistics --- 00:41:23.418 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:23.418 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:41:23.418 13:03:42 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:41:23.418 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:41:23.418 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.037 ms 00:41:23.418 00:41:23.418 --- 10.0.0.3 ping statistics --- 00:41:23.418 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:23.418 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:41:23.418 13:03:42 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:41:23.418 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:41:23.418 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:41:23.418 00:41:23.418 --- 10.0.0.1 ping statistics --- 00:41:23.418 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:23.418 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:41:23.418 13:03:42 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:23.418 13:03:42 -- nvmf/common.sh@421 -- # return 0 00:41:23.418 13:03:42 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:41:23.418 13:03:42 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:23.418 13:03:42 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:41:23.418 13:03:42 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:41:23.418 13:03:42 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:23.418 13:03:42 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:41:23.418 13:03:42 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:41:23.418 13:03:42 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:41:23.418 13:03:42 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:41:23.418 13:03:42 -- common/autotest_common.sh@712 -- # xtrace_disable 00:41:23.418 13:03:42 -- common/autotest_common.sh@10 -- # set +x 00:41:23.418 13:03:42 -- nvmf/common.sh@469 -- # nvmfpid=86300 00:41:23.418 13:03:42 -- nvmf/common.sh@470 -- # waitforlisten 86300 00:41:23.418 13:03:42 -- common/autotest_common.sh@819 -- # '[' -z 86300 ']' 00:41:23.418 13:03:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:23.418 13:03:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:41:23.418 13:03:42 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:41:23.418 13:03:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:23.418 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:23.418 13:03:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:41:23.418 13:03:42 -- common/autotest_common.sh@10 -- # set +x 00:41:23.418 [2024-07-22 13:03:42.794305] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
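The trace above is the harness building its test network before starting the target: a network namespace (nvmf_tgt_ns_spdk) holds the target-side veth ends, the initiator side stays in the root namespace on 10.0.0.1, and a bridge (nvmf_br) joins the peer ends so the two sides can reach each other, which the three pings confirm. A condensed sketch of the same topology, using only commands that appear in the trace and omitting the second target interface for brevity (iproute2 and iptables assumed available):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # target end lives inside the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                      # bridge the root-namespace peer ends
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                           # initiator -> target sanity check

With connectivity verified and nvme-tcp loaded, nvmf_tgt is launched inside the namespace (ip netns exec nvmf_tgt_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xF), which produces the SPDK/DPDK startup banner that continues below.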
00:41:23.418 [2024-07-22 13:03:42.794399] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:23.677 [2024-07-22 13:03:42.939893] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:41:23.677 [2024-07-22 13:03:43.040617] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:41:23.677 [2024-07-22 13:03:43.040795] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:23.677 [2024-07-22 13:03:43.040821] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:23.677 [2024-07-22 13:03:43.040841] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:23.677 [2024-07-22 13:03:43.041027] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:41:23.677 [2024-07-22 13:03:43.041178] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:41:23.677 [2024-07-22 13:03:43.041257] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:41:23.677 [2024-07-22 13:03:43.041264] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:41:24.611 13:03:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:41:24.611 13:03:43 -- common/autotest_common.sh@852 -- # return 0 00:41:24.611 13:03:43 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:41:24.611 13:03:43 -- common/autotest_common.sh@718 -- # xtrace_disable 00:41:24.611 13:03:43 -- common/autotest_common.sh@10 -- # set +x 00:41:24.611 13:03:43 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:24.611 13:03:43 -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:41:24.868 [2024-07-22 13:03:44.087417] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:24.868 13:03:44 -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:41:25.126 13:03:44 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:41:25.126 13:03:44 -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:41:25.692 13:03:44 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:41:25.692 13:03:44 -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:41:25.949 13:03:45 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:41:25.949 13:03:45 -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:41:26.207 13:03:45 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:41:26.207 13:03:45 -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:41:26.465 13:03:45 -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:41:26.723 13:03:45 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:41:26.723 13:03:45 -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:41:26.981 13:03:46 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:41:26.981 13:03:46 -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:41:27.242 13:03:46 -- target/fio.sh@31 -- # 
concat_malloc_bdevs+=Malloc6 00:41:27.242 13:03:46 -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:41:27.509 13:03:46 -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:41:27.767 13:03:46 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:41:27.767 13:03:46 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:41:28.025 13:03:47 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:41:28.025 13:03:47 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:41:28.283 13:03:47 -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:28.542 [2024-07-22 13:03:47.751037] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:28.542 13:03:47 -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:41:28.801 13:03:47 -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:41:29.060 13:03:48 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:864ae4a3-cd96-439b-8ef2-6dab4d992115 --hostid=864ae4a3-cd96-439b-8ef2-6dab4d992115 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:41:29.060 13:03:48 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:41:29.060 13:03:48 -- common/autotest_common.sh@1177 -- # local i=0 00:41:29.060 13:03:48 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:41:29.060 13:03:48 -- common/autotest_common.sh@1179 -- # [[ -n 4 ]] 00:41:29.060 13:03:48 -- common/autotest_common.sh@1180 -- # nvme_device_counter=4 00:41:29.060 13:03:48 -- common/autotest_common.sh@1184 -- # sleep 2 00:41:31.592 13:03:50 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:41:31.592 13:03:50 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:41:31.592 13:03:50 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:41:31.592 13:03:50 -- common/autotest_common.sh@1186 -- # nvme_devices=4 00:41:31.592 13:03:50 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:41:31.592 13:03:50 -- common/autotest_common.sh@1187 -- # return 0 00:41:31.592 13:03:50 -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:41:31.592 [global] 00:41:31.592 thread=1 00:41:31.592 invalidate=1 00:41:31.592 rw=write 00:41:31.592 time_based=1 00:41:31.592 runtime=1 00:41:31.592 ioengine=libaio 00:41:31.592 direct=1 00:41:31.592 bs=4096 00:41:31.592 iodepth=1 00:41:31.592 norandommap=0 00:41:31.592 numjobs=1 00:41:31.592 00:41:31.592 verify_dump=1 00:41:31.592 verify_backlog=512 00:41:31.593 verify_state_save=0 00:41:31.593 do_verify=1 00:41:31.593 verify=crc32c-intel 00:41:31.593 [job0] 00:41:31.593 filename=/dev/nvme0n1 00:41:31.593 [job1] 00:41:31.593 filename=/dev/nvme0n2 00:41:31.593 [job2] 00:41:31.593 filename=/dev/nvme0n3 00:41:31.593 [job3] 00:41:31.593 filename=/dev/nvme0n4 00:41:31.593 Could not set queue depth (nvme0n1) 00:41:31.593 Could not set queue depth (nvme0n2) 
00:41:31.593 Could not set queue depth (nvme0n3) 00:41:31.593 Could not set queue depth (nvme0n4) 00:41:31.593 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:31.593 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:31.593 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:31.593 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:31.593 fio-3.35 00:41:31.593 Starting 4 threads 00:41:32.527 00:41:32.527 job0: (groupid=0, jobs=1): err= 0: pid=86593: Mon Jul 22 13:03:51 2024 00:41:32.527 read: IOPS=2958, BW=11.6MiB/s (12.1MB/s)(11.6MiB/1001msec) 00:41:32.527 slat (nsec): min=13108, max=59221, avg=17171.43, stdev=3997.68 00:41:32.527 clat (usec): min=128, max=271, avg=160.17, stdev=12.73 00:41:32.527 lat (usec): min=143, max=289, avg=177.34, stdev=13.53 00:41:32.527 clat percentiles (usec): 00:41:32.527 | 1.00th=[ 137], 5.00th=[ 143], 10.00th=[ 147], 20.00th=[ 151], 00:41:32.527 | 30.00th=[ 153], 40.00th=[ 157], 50.00th=[ 159], 60.00th=[ 161], 00:41:32.527 | 70.00th=[ 165], 80.00th=[ 169], 90.00th=[ 176], 95.00th=[ 182], 00:41:32.527 | 99.00th=[ 200], 99.50th=[ 210], 99.90th=[ 253], 99.95th=[ 262], 00:41:32.527 | 99.99th=[ 273] 00:41:32.527 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:41:32.527 slat (usec): min=18, max=138, avg=24.43, stdev= 5.72 00:41:32.527 clat (usec): min=79, max=2145, avg=126.48, stdev=39.04 00:41:32.527 lat (usec): min=115, max=2170, avg=150.91, stdev=39.57 00:41:32.527 clat percentiles (usec): 00:41:32.527 | 1.00th=[ 104], 5.00th=[ 111], 10.00th=[ 113], 20.00th=[ 117], 00:41:32.527 | 30.00th=[ 119], 40.00th=[ 122], 50.00th=[ 124], 60.00th=[ 127], 00:41:32.527 | 70.00th=[ 131], 80.00th=[ 135], 90.00th=[ 141], 95.00th=[ 149], 00:41:32.527 | 99.00th=[ 165], 99.50th=[ 176], 99.90th=[ 245], 99.95th=[ 457], 00:41:32.527 | 99.99th=[ 2147] 00:41:32.527 bw ( KiB/s): min=12288, max=12288, per=30.03%, avg=12288.00, stdev= 0.00, samples=1 00:41:32.527 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:41:32.527 lat (usec) : 100=0.05%, 250=99.87%, 500=0.07% 00:41:32.527 lat (msec) : 4=0.02% 00:41:32.527 cpu : usr=2.50%, sys=9.30%, ctx=6033, majf=0, minf=7 00:41:32.527 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:32.527 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:32.527 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:32.527 issued rwts: total=2961,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:32.527 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:32.527 job1: (groupid=0, jobs=1): err= 0: pid=86594: Mon Jul 22 13:03:51 2024 00:41:32.527 read: IOPS=1803, BW=7213KiB/s (7386kB/s)(7220KiB/1001msec) 00:41:32.527 slat (nsec): min=8786, max=47555, avg=15769.66, stdev=3521.67 00:41:32.527 clat (usec): min=213, max=402, avg=267.55, stdev=22.63 00:41:32.527 lat (usec): min=235, max=417, avg=283.32, stdev=22.50 00:41:32.527 clat percentiles (usec): 00:41:32.527 | 1.00th=[ 229], 5.00th=[ 239], 10.00th=[ 245], 20.00th=[ 251], 00:41:32.527 | 30.00th=[ 255], 40.00th=[ 260], 50.00th=[ 265], 60.00th=[ 269], 00:41:32.527 | 70.00th=[ 273], 80.00th=[ 281], 90.00th=[ 293], 95.00th=[ 306], 00:41:32.527 | 99.00th=[ 347], 99.50th=[ 367], 99.90th=[ 404], 99.95th=[ 404], 00:41:32.527 | 99.99th=[ 404] 
00:41:32.527 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:41:32.527 slat (nsec): min=10811, max=68189, avg=23012.52, stdev=5899.98 00:41:32.527 clat (usec): min=100, max=7782, avg=212.15, stdev=210.09 00:41:32.527 lat (usec): min=125, max=7806, avg=235.16, stdev=209.78 00:41:32.527 clat percentiles (usec): 00:41:32.527 | 1.00th=[ 115], 5.00th=[ 122], 10.00th=[ 127], 20.00th=[ 139], 00:41:32.527 | 30.00th=[ 192], 40.00th=[ 204], 50.00th=[ 210], 60.00th=[ 219], 00:41:32.527 | 70.00th=[ 227], 80.00th=[ 237], 90.00th=[ 255], 95.00th=[ 269], 00:41:32.527 | 99.00th=[ 310], 99.50th=[ 717], 99.90th=[ 2343], 99.95th=[ 2442], 00:41:32.527 | 99.99th=[ 7767] 00:41:32.527 bw ( KiB/s): min= 8192, max= 8192, per=20.02%, avg=8192.00, stdev= 0.00, samples=1 00:41:32.527 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:41:32.527 lat (usec) : 250=55.44%, 500=44.23%, 750=0.08% 00:41:32.527 lat (msec) : 2=0.13%, 4=0.10%, 10=0.03% 00:41:32.527 cpu : usr=1.60%, sys=5.70%, ctx=3854, majf=0, minf=11 00:41:32.527 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:32.527 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:32.527 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:32.527 issued rwts: total=1805,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:32.527 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:32.527 job2: (groupid=0, jobs=1): err= 0: pid=86595: Mon Jul 22 13:03:51 2024 00:41:32.527 read: IOPS=1933, BW=7732KiB/s (7918kB/s)(7740KiB/1001msec) 00:41:32.527 slat (nsec): min=10593, max=43972, avg=14449.75, stdev=3450.28 00:41:32.527 clat (usec): min=141, max=395, avg=263.36, stdev=31.67 00:41:32.527 lat (usec): min=160, max=408, avg=277.81, stdev=30.91 00:41:32.527 clat percentiles (usec): 00:41:32.527 | 1.00th=[ 155], 5.00th=[ 184], 10.00th=[ 241], 20.00th=[ 251], 00:41:32.527 | 30.00th=[ 255], 40.00th=[ 262], 50.00th=[ 265], 60.00th=[ 269], 00:41:32.527 | 70.00th=[ 273], 80.00th=[ 281], 90.00th=[ 293], 95.00th=[ 306], 00:41:32.527 | 99.00th=[ 343], 99.50th=[ 359], 99.90th=[ 388], 99.95th=[ 396], 00:41:32.527 | 99.99th=[ 396] 00:41:32.527 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:41:32.528 slat (nsec): min=11421, max=83667, avg=22966.46, stdev=5459.21 00:41:32.528 clat (usec): min=110, max=341, avg=199.28, stdev=45.30 00:41:32.528 lat (usec): min=134, max=356, avg=222.24, stdev=43.47 00:41:32.528 clat percentiles (usec): 00:41:32.528 | 1.00th=[ 118], 5.00th=[ 125], 10.00th=[ 130], 20.00th=[ 141], 00:41:32.528 | 30.00th=[ 190], 40.00th=[ 202], 50.00th=[ 208], 60.00th=[ 217], 00:41:32.528 | 70.00th=[ 225], 80.00th=[ 237], 90.00th=[ 253], 95.00th=[ 265], 00:41:32.528 | 99.00th=[ 289], 99.50th=[ 302], 99.90th=[ 318], 99.95th=[ 318], 00:41:32.528 | 99.99th=[ 343] 00:41:32.528 bw ( KiB/s): min= 8488, max= 8488, per=20.74%, avg=8488.00, stdev= 0.00, samples=1 00:41:32.528 iops : min= 2122, max= 2122, avg=2122.00, stdev= 0.00, samples=1 00:41:32.528 lat (usec) : 250=55.49%, 500=44.51% 00:41:32.528 cpu : usr=2.20%, sys=5.40%, ctx=3986, majf=0, minf=6 00:41:32.528 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:32.528 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:32.528 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:32.528 issued rwts: total=1935,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:32.528 latency : target=0, window=0, 
percentile=100.00%, depth=1 00:41:32.528 job3: (groupid=0, jobs=1): err= 0: pid=86596: Mon Jul 22 13:03:51 2024 00:41:32.528 read: IOPS=2806, BW=11.0MiB/s (11.5MB/s)(11.0MiB/1001msec) 00:41:32.528 slat (nsec): min=13072, max=51040, avg=16170.16, stdev=3367.89 00:41:32.528 clat (usec): min=139, max=250, avg=167.70, stdev=12.59 00:41:32.528 lat (usec): min=152, max=267, avg=183.87, stdev=13.16 00:41:32.528 clat percentiles (usec): 00:41:32.528 | 1.00th=[ 147], 5.00th=[ 151], 10.00th=[ 153], 20.00th=[ 157], 00:41:32.528 | 30.00th=[ 161], 40.00th=[ 163], 50.00th=[ 167], 60.00th=[ 169], 00:41:32.528 | 70.00th=[ 174], 80.00th=[ 178], 90.00th=[ 184], 95.00th=[ 190], 00:41:32.528 | 99.00th=[ 208], 99.50th=[ 217], 99.90th=[ 239], 99.95th=[ 241], 00:41:32.528 | 99.99th=[ 251] 00:41:32.528 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:41:32.528 slat (nsec): min=18619, max=95481, avg=23773.20, stdev=5085.63 00:41:32.528 clat (usec): min=102, max=208, avg=130.25, stdev=12.47 00:41:32.528 lat (usec): min=122, max=301, avg=154.02, stdev=14.02 00:41:32.528 clat percentiles (usec): 00:41:32.528 | 1.00th=[ 111], 5.00th=[ 114], 10.00th=[ 117], 20.00th=[ 120], 00:41:32.528 | 30.00th=[ 123], 40.00th=[ 126], 50.00th=[ 129], 60.00th=[ 133], 00:41:32.528 | 70.00th=[ 135], 80.00th=[ 141], 90.00th=[ 147], 95.00th=[ 153], 00:41:32.528 | 99.00th=[ 167], 99.50th=[ 176], 99.90th=[ 186], 99.95th=[ 206], 00:41:32.528 | 99.99th=[ 208] 00:41:32.528 bw ( KiB/s): min=12288, max=12288, per=30.03%, avg=12288.00, stdev= 0.00, samples=1 00:41:32.528 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:41:32.528 lat (usec) : 250=99.98%, 500=0.02% 00:41:32.528 cpu : usr=2.20%, sys=8.80%, ctx=5881, majf=0, minf=13 00:41:32.528 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:32.528 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:32.528 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:32.528 issued rwts: total=2809,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:32.528 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:32.528 00:41:32.528 Run status group 0 (all jobs): 00:41:32.528 READ: bw=37.1MiB/s (38.9MB/s), 7213KiB/s-11.6MiB/s (7386kB/s-12.1MB/s), io=37.1MiB (39.0MB), run=1001-1001msec 00:41:32.528 WRITE: bw=40.0MiB/s (41.9MB/s), 8184KiB/s-12.0MiB/s (8380kB/s-12.6MB/s), io=40.0MiB (41.9MB), run=1001-1001msec 00:41:32.528 00:41:32.528 Disk stats (read/write): 00:41:32.528 nvme0n1: ios=2610/2611, merge=0/0, ticks=446/356, in_queue=802, util=88.48% 00:41:32.528 nvme0n2: ios=1568/1800, merge=0/0, ticks=433/371, in_queue=804, util=87.51% 00:41:32.528 nvme0n3: ios=1536/1941, merge=0/0, ticks=401/397, in_queue=798, util=89.13% 00:41:32.528 nvme0n4: ios=2482/2560, merge=0/0, ticks=432/368, in_queue=800, util=89.79% 00:41:32.528 13:03:51 -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:41:32.528 [global] 00:41:32.528 thread=1 00:41:32.528 invalidate=1 00:41:32.528 rw=randwrite 00:41:32.528 time_based=1 00:41:32.528 runtime=1 00:41:32.528 ioengine=libaio 00:41:32.528 direct=1 00:41:32.528 bs=4096 00:41:32.528 iodepth=1 00:41:32.528 norandommap=0 00:41:32.528 numjobs=1 00:41:32.528 00:41:32.528 verify_dump=1 00:41:32.528 verify_backlog=512 00:41:32.528 verify_state_save=0 00:41:32.528 do_verify=1 00:41:32.528 verify=crc32c-intel 00:41:32.528 [job0] 00:41:32.528 filename=/dev/nvme0n1 00:41:32.528 [job1] 00:41:32.528 
filename=/dev/nvme0n2 00:41:32.528 [job2] 00:41:32.528 filename=/dev/nvme0n3 00:41:32.528 [job3] 00:41:32.528 filename=/dev/nvme0n4 00:41:32.528 Could not set queue depth (nvme0n1) 00:41:32.528 Could not set queue depth (nvme0n2) 00:41:32.528 Could not set queue depth (nvme0n3) 00:41:32.528 Could not set queue depth (nvme0n4) 00:41:32.786 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:32.786 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:32.786 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:32.786 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:32.786 fio-3.35 00:41:32.786 Starting 4 threads 00:41:34.195 00:41:34.195 job0: (groupid=0, jobs=1): err= 0: pid=86659: Mon Jul 22 13:03:53 2024 00:41:34.195 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:41:34.195 slat (nsec): min=12039, max=47196, avg=14854.35, stdev=3659.07 00:41:34.195 clat (usec): min=124, max=735, avg=154.95, stdev=17.66 00:41:34.195 lat (usec): min=138, max=748, avg=169.81, stdev=17.89 00:41:34.195 clat percentiles (usec): 00:41:34.195 | 1.00th=[ 133], 5.00th=[ 137], 10.00th=[ 141], 20.00th=[ 145], 00:41:34.195 | 30.00th=[ 147], 40.00th=[ 151], 50.00th=[ 153], 60.00th=[ 157], 00:41:34.195 | 70.00th=[ 161], 80.00th=[ 165], 90.00th=[ 174], 95.00th=[ 180], 00:41:34.195 | 99.00th=[ 192], 99.50th=[ 198], 99.90th=[ 210], 99.95th=[ 474], 00:41:34.195 | 99.99th=[ 734] 00:41:34.195 write: IOPS=3287, BW=12.8MiB/s (13.5MB/s)(12.9MiB/1001msec); 0 zone resets 00:41:34.195 slat (nsec): min=17225, max=84054, avg=21131.12, stdev=4999.90 00:41:34.195 clat (usec): min=92, max=1109, avg=121.00, stdev=22.73 00:41:34.195 lat (usec): min=112, max=1129, avg=142.13, stdev=23.14 00:41:34.195 clat percentiles (usec): 00:41:34.195 | 1.00th=[ 98], 5.00th=[ 103], 10.00th=[ 106], 20.00th=[ 111], 00:41:34.195 | 30.00th=[ 114], 40.00th=[ 117], 50.00th=[ 119], 60.00th=[ 122], 00:41:34.195 | 70.00th=[ 126], 80.00th=[ 131], 90.00th=[ 139], 95.00th=[ 145], 00:41:34.195 | 99.00th=[ 157], 99.50th=[ 161], 99.90th=[ 221], 99.95th=[ 498], 00:41:34.195 | 99.99th=[ 1106] 00:41:34.195 bw ( KiB/s): min=13240, max=13240, per=31.68%, avg=13240.00, stdev= 0.00, samples=1 00:41:34.195 iops : min= 3310, max= 3310, avg=3310.00, stdev= 0.00, samples=1 00:41:34.195 lat (usec) : 100=0.97%, 250=98.95%, 500=0.05%, 750=0.02% 00:41:34.195 lat (msec) : 2=0.02% 00:41:34.195 cpu : usr=2.00%, sys=8.70%, ctx=6367, majf=0, minf=9 00:41:34.195 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:34.195 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:34.195 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:34.195 issued rwts: total=3072,3291,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:34.195 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:34.195 job1: (groupid=0, jobs=1): err= 0: pid=86660: Mon Jul 22 13:03:53 2024 00:41:34.195 read: IOPS=1717, BW=6869KiB/s (7034kB/s)(6876KiB/1001msec) 00:41:34.195 slat (nsec): min=10910, max=44056, avg=12954.26, stdev=3458.63 00:41:34.195 clat (usec): min=233, max=373, avg=276.70, stdev=17.94 00:41:34.195 lat (usec): min=245, max=390, avg=289.65, stdev=18.14 00:41:34.195 clat percentiles (usec): 00:41:34.195 | 1.00th=[ 241], 5.00th=[ 249], 10.00th=[ 255], 20.00th=[ 262], 
00:41:34.195 | 30.00th=[ 269], 40.00th=[ 273], 50.00th=[ 277], 60.00th=[ 281], 00:41:34.195 | 70.00th=[ 285], 80.00th=[ 289], 90.00th=[ 297], 95.00th=[ 306], 00:41:34.195 | 99.00th=[ 330], 99.50th=[ 334], 99.90th=[ 375], 99.95th=[ 375], 00:41:34.195 | 99.99th=[ 375] 00:41:34.195 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:41:34.195 slat (usec): min=15, max=112, avg=22.92, stdev= 6.28 00:41:34.195 clat (usec): min=103, max=300, avg=218.87, stdev=18.79 00:41:34.195 lat (usec): min=147, max=316, avg=241.80, stdev=18.48 00:41:34.195 clat percentiles (usec): 00:41:34.195 | 1.00th=[ 180], 5.00th=[ 192], 10.00th=[ 196], 20.00th=[ 204], 00:41:34.195 | 30.00th=[ 208], 40.00th=[ 215], 50.00th=[ 219], 60.00th=[ 223], 00:41:34.195 | 70.00th=[ 229], 80.00th=[ 235], 90.00th=[ 243], 95.00th=[ 251], 00:41:34.195 | 99.00th=[ 269], 99.50th=[ 269], 99.90th=[ 293], 99.95th=[ 297], 00:41:34.195 | 99.99th=[ 302] 00:41:34.195 bw ( KiB/s): min= 8192, max= 8192, per=19.60%, avg=8192.00, stdev= 0.00, samples=1 00:41:34.195 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:41:34.195 lat (usec) : 250=54.02%, 500=45.98% 00:41:34.195 cpu : usr=1.10%, sys=5.80%, ctx=3767, majf=0, minf=7 00:41:34.195 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:34.195 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:34.195 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:34.195 issued rwts: total=1719,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:34.195 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:34.196 job2: (groupid=0, jobs=1): err= 0: pid=86661: Mon Jul 22 13:03:53 2024 00:41:34.196 read: IOPS=2589, BW=10.1MiB/s (10.6MB/s)(10.1MiB/1001msec) 00:41:34.196 slat (usec): min=12, max=229, avg=17.05, stdev=11.20 00:41:34.196 clat (usec): min=4, max=450, avg=176.38, stdev=20.69 00:41:34.196 lat (usec): min=153, max=465, avg=193.43, stdev=22.40 00:41:34.196 clat percentiles (usec): 00:41:34.196 | 1.00th=[ 149], 5.00th=[ 155], 10.00th=[ 159], 20.00th=[ 163], 00:41:34.196 | 30.00th=[ 167], 40.00th=[ 169], 50.00th=[ 174], 60.00th=[ 178], 00:41:34.196 | 70.00th=[ 184], 80.00th=[ 188], 90.00th=[ 196], 95.00th=[ 204], 00:41:34.196 | 99.00th=[ 235], 99.50th=[ 306], 99.90th=[ 351], 99.95th=[ 355], 00:41:34.196 | 99.99th=[ 453] 00:41:34.196 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:41:34.196 slat (usec): min=17, max=286, avg=23.17, stdev= 7.29 00:41:34.196 clat (usec): min=5, max=741, avg=135.71, stdev=22.08 00:41:34.196 lat (usec): min=124, max=766, avg=158.89, stdev=23.06 00:41:34.196 clat percentiles (usec): 00:41:34.196 | 1.00th=[ 111], 5.00th=[ 117], 10.00th=[ 120], 20.00th=[ 124], 00:41:34.196 | 30.00th=[ 127], 40.00th=[ 131], 50.00th=[ 135], 60.00th=[ 137], 00:41:34.196 | 70.00th=[ 141], 80.00th=[ 147], 90.00th=[ 155], 95.00th=[ 161], 00:41:34.196 | 99.00th=[ 178], 99.50th=[ 182], 99.90th=[ 424], 99.95th=[ 701], 00:41:34.196 | 99.99th=[ 742] 00:41:34.196 bw ( KiB/s): min=12288, max=12288, per=29.40%, avg=12288.00, stdev= 0.00, samples=1 00:41:34.196 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:41:34.196 lat (usec) : 10=0.04%, 50=0.05%, 100=0.02%, 250=99.38%, 500=0.48% 00:41:34.196 lat (usec) : 750=0.04% 00:41:34.196 cpu : usr=1.70%, sys=8.90%, ctx=5674, majf=0, minf=17 00:41:34.196 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:34.196 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:41:34.196 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:34.196 issued rwts: total=2592,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:34.196 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:34.196 job3: (groupid=0, jobs=1): err= 0: pid=86662: Mon Jul 22 13:03:53 2024 00:41:34.196 read: IOPS=1718, BW=6873KiB/s (7038kB/s)(6880KiB/1001msec) 00:41:34.196 slat (nsec): min=13093, max=53653, avg=15372.93, stdev=4009.92 00:41:34.196 clat (usec): min=141, max=346, avg=274.14, stdev=17.83 00:41:34.196 lat (usec): min=168, max=364, avg=289.51, stdev=17.99 00:41:34.196 clat percentiles (usec): 00:41:34.196 | 1.00th=[ 241], 5.00th=[ 247], 10.00th=[ 251], 20.00th=[ 260], 00:41:34.196 | 30.00th=[ 265], 40.00th=[ 269], 50.00th=[ 273], 60.00th=[ 277], 00:41:34.196 | 70.00th=[ 285], 80.00th=[ 289], 90.00th=[ 297], 95.00th=[ 306], 00:41:34.196 | 99.00th=[ 318], 99.50th=[ 326], 99.90th=[ 343], 99.95th=[ 347], 00:41:34.196 | 99.99th=[ 347] 00:41:34.196 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:41:34.196 slat (nsec): min=18827, max=78650, avg=22827.37, stdev=5881.34 00:41:34.196 clat (usec): min=140, max=396, avg=219.04, stdev=19.05 00:41:34.196 lat (usec): min=167, max=416, avg=241.87, stdev=18.55 00:41:34.196 clat percentiles (usec): 00:41:34.196 | 1.00th=[ 178], 5.00th=[ 192], 10.00th=[ 198], 20.00th=[ 204], 00:41:34.196 | 30.00th=[ 208], 40.00th=[ 215], 50.00th=[ 219], 60.00th=[ 223], 00:41:34.196 | 70.00th=[ 227], 80.00th=[ 233], 90.00th=[ 243], 95.00th=[ 251], 00:41:34.196 | 99.00th=[ 269], 99.50th=[ 277], 99.90th=[ 297], 99.95th=[ 359], 00:41:34.196 | 99.99th=[ 396] 00:41:34.196 bw ( KiB/s): min= 8208, max= 8208, per=19.64%, avg=8208.00, stdev= 0.00, samples=1 00:41:34.196 iops : min= 2052, max= 2052, avg=2052.00, stdev= 0.00, samples=1 00:41:34.196 lat (usec) : 250=54.86%, 500=45.14% 00:41:34.196 cpu : usr=1.60%, sys=5.20%, ctx=3769, majf=0, minf=12 00:41:34.196 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:34.196 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:34.196 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:34.196 issued rwts: total=1720,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:34.196 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:34.196 00:41:34.196 Run status group 0 (all jobs): 00:41:34.196 READ: bw=35.5MiB/s (37.2MB/s), 6869KiB/s-12.0MiB/s (7034kB/s-12.6MB/s), io=35.6MiB (37.3MB), run=1001-1001msec 00:41:34.196 WRITE: bw=40.8MiB/s (42.8MB/s), 8184KiB/s-12.8MiB/s (8380kB/s-13.5MB/s), io=40.9MiB (42.8MB), run=1001-1001msec 00:41:34.196 00:41:34.196 Disk stats (read/write): 00:41:34.196 nvme0n1: ios=2610/2866, merge=0/0, ticks=449/382, in_queue=831, util=87.47% 00:41:34.196 nvme0n2: ios=1557/1654, merge=0/0, ticks=427/384, in_queue=811, util=86.78% 00:41:34.196 nvme0n3: ios=2234/2560, merge=0/0, ticks=409/383, in_queue=792, util=88.91% 00:41:34.196 nvme0n4: ios=1536/1653, merge=0/0, ticks=438/378, in_queue=816, util=89.57% 00:41:34.196 13:03:53 -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:41:34.196 [global] 00:41:34.196 thread=1 00:41:34.196 invalidate=1 00:41:34.196 rw=write 00:41:34.196 time_based=1 00:41:34.196 runtime=1 00:41:34.196 ioengine=libaio 00:41:34.196 direct=1 00:41:34.196 bs=4096 00:41:34.196 iodepth=128 00:41:34.196 norandommap=0 00:41:34.196 numjobs=1 00:41:34.196 00:41:34.196 
verify_dump=1 00:41:34.196 verify_backlog=512 00:41:34.196 verify_state_save=0 00:41:34.196 do_verify=1 00:41:34.196 verify=crc32c-intel 00:41:34.196 [job0] 00:41:34.196 filename=/dev/nvme0n1 00:41:34.196 [job1] 00:41:34.196 filename=/dev/nvme0n2 00:41:34.196 [job2] 00:41:34.196 filename=/dev/nvme0n3 00:41:34.196 [job3] 00:41:34.196 filename=/dev/nvme0n4 00:41:34.196 Could not set queue depth (nvme0n1) 00:41:34.196 Could not set queue depth (nvme0n2) 00:41:34.196 Could not set queue depth (nvme0n3) 00:41:34.196 Could not set queue depth (nvme0n4) 00:41:34.196 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:41:34.196 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:41:34.196 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:41:34.196 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:41:34.196 fio-3.35 00:41:34.196 Starting 4 threads 00:41:35.574 00:41:35.574 job0: (groupid=0, jobs=1): err= 0: pid=86717: Mon Jul 22 13:03:54 2024 00:41:35.574 read: IOPS=5626, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1001msec) 00:41:35.574 slat (usec): min=4, max=3319, avg=82.22, stdev=390.34 00:41:35.574 clat (usec): min=7951, max=14571, avg=11138.99, stdev=1110.09 00:41:35.574 lat (usec): min=7968, max=14602, avg=11221.21, stdev=1088.22 00:41:35.574 clat percentiles (usec): 00:41:35.574 | 1.00th=[ 8455], 5.00th=[ 8848], 10.00th=[ 9372], 20.00th=[10290], 00:41:35.574 | 30.00th=[10814], 40.00th=[11076], 50.00th=[11338], 60.00th=[11600], 00:41:35.574 | 70.00th=[11731], 80.00th=[11994], 90.00th=[12256], 95.00th=[12518], 00:41:35.574 | 99.00th=[13435], 99.50th=[13698], 99.90th=[13960], 99.95th=[14484], 00:41:35.574 | 99.99th=[14615] 00:41:35.574 write: IOPS=5697, BW=22.3MiB/s (23.3MB/s)(22.3MiB/1001msec); 0 zone resets 00:41:35.574 slat (usec): min=10, max=3559, avg=86.73, stdev=406.58 00:41:35.574 clat (usec): min=273, max=14785, avg=11184.81, stdev=1477.97 00:41:35.574 lat (usec): min=2620, max=14809, avg=11271.54, stdev=1448.65 00:41:35.574 clat percentiles (usec): 00:41:35.574 | 1.00th=[ 5735], 5.00th=[ 8717], 10.00th=[ 8979], 20.00th=[ 9503], 00:41:35.574 | 30.00th=[11338], 40.00th=[11469], 50.00th=[11731], 60.00th=[11863], 00:41:35.574 | 70.00th=[11994], 80.00th=[12256], 90.00th=[12387], 95.00th=[12649], 00:41:35.574 | 99.00th=[13042], 99.50th=[13304], 99.90th=[14222], 99.95th=[14222], 00:41:35.574 | 99.99th=[14746] 00:41:35.574 bw ( KiB/s): min=23904, max=23904, per=36.45%, avg=23904.00, stdev= 0.00, samples=1 00:41:35.574 iops : min= 5976, max= 5976, avg=5976.00, stdev= 0.00, samples=1 00:41:35.574 lat (usec) : 500=0.01% 00:41:35.574 lat (msec) : 4=0.40%, 10=19.89%, 20=79.71% 00:41:35.574 cpu : usr=4.20%, sys=15.40%, ctx=671, majf=0, minf=13 00:41:35.574 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:41:35.574 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:35.574 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:41:35.574 issued rwts: total=5632,5703,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:35.574 latency : target=0, window=0, percentile=100.00%, depth=128 00:41:35.574 job1: (groupid=0, jobs=1): err= 0: pid=86718: Mon Jul 22 13:03:54 2024 00:41:35.574 read: IOPS=2547, BW=9.95MiB/s (10.4MB/s)(10.0MiB/1005msec) 00:41:35.574 slat (usec): min=6, max=11087, avg=179.38, stdev=803.56 00:41:35.574 clat 
(usec): min=14611, max=37145, avg=22336.54, stdev=4375.51 00:41:35.574 lat (usec): min=14643, max=37165, avg=22515.92, stdev=4444.43 00:41:35.574 clat percentiles (usec): 00:41:35.574 | 1.00th=[16057], 5.00th=[16909], 10.00th=[17433], 20.00th=[18482], 00:41:35.574 | 30.00th=[19530], 40.00th=[20317], 50.00th=[20841], 60.00th=[22152], 00:41:35.574 | 70.00th=[25035], 80.00th=[26346], 90.00th=[29230], 95.00th=[31065], 00:41:35.574 | 99.00th=[33162], 99.50th=[33817], 99.90th=[35914], 99.95th=[36439], 00:41:35.574 | 99.99th=[36963] 00:41:35.574 write: IOPS=3026, BW=11.8MiB/s (12.4MB/s)(11.9MiB/1005msec); 0 zone resets 00:41:35.574 slat (usec): min=14, max=5928, avg=169.36, stdev=637.03 00:41:35.574 clat (usec): min=2109, max=37150, avg=22874.57, stdev=5428.13 00:41:35.574 lat (usec): min=5362, max=37175, avg=23043.93, stdev=5454.11 00:41:35.574 clat percentiles (usec): 00:41:35.574 | 1.00th=[10814], 5.00th=[15270], 10.00th=[15533], 20.00th=[18220], 00:41:35.574 | 30.00th=[20579], 40.00th=[21365], 50.00th=[23462], 60.00th=[23987], 00:41:35.574 | 70.00th=[24511], 80.00th=[25035], 90.00th=[30278], 95.00th=[35390], 00:41:35.574 | 99.00th=[36439], 99.50th=[36963], 99.90th=[36963], 99.95th=[36963], 00:41:35.574 | 99.99th=[36963] 00:41:35.574 bw ( KiB/s): min=11024, max=12312, per=17.79%, avg=11668.00, stdev=910.75, samples=2 00:41:35.574 iops : min= 2756, max= 3078, avg=2917.00, stdev=227.69, samples=2 00:41:35.574 lat (msec) : 4=0.02%, 10=0.29%, 20=29.35%, 50=70.35% 00:41:35.574 cpu : usr=2.99%, sys=9.86%, ctx=400, majf=0, minf=11 00:41:35.574 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:41:35.574 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:35.574 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:41:35.574 issued rwts: total=2560,3042,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:35.574 latency : target=0, window=0, percentile=100.00%, depth=128 00:41:35.574 job2: (groupid=0, jobs=1): err= 0: pid=86719: Mon Jul 22 13:03:54 2024 00:41:35.574 read: IOPS=2549, BW=9.96MiB/s (10.4MB/s)(10.0MiB/1004msec) 00:41:35.574 slat (usec): min=5, max=7893, avg=194.81, stdev=833.00 00:41:35.574 clat (usec): min=15981, max=40954, avg=25256.27, stdev=3166.22 00:41:35.574 lat (usec): min=15996, max=45232, avg=25451.08, stdev=3092.18 00:41:35.574 clat percentiles (usec): 00:41:35.574 | 1.00th=[17695], 5.00th=[20841], 10.00th=[22152], 20.00th=[23725], 00:41:35.574 | 30.00th=[23987], 40.00th=[24511], 50.00th=[24773], 60.00th=[25035], 00:41:35.574 | 70.00th=[26346], 80.00th=[27395], 90.00th=[28181], 95.00th=[30278], 00:41:35.574 | 99.00th=[36963], 99.50th=[40109], 99.90th=[41157], 99.95th=[41157], 00:41:35.574 | 99.99th=[41157] 00:41:35.574 write: IOPS=2626, BW=10.3MiB/s (10.8MB/s)(10.3MiB/1004msec); 0 zone resets 00:41:35.574 slat (usec): min=13, max=6255, avg=181.40, stdev=763.21 00:41:35.574 clat (usec): min=1385, max=42262, avg=23290.72, stdev=6053.67 00:41:35.574 lat (usec): min=4626, max=42295, avg=23472.12, stdev=6052.51 00:41:35.574 clat percentiles (usec): 00:41:35.574 | 1.00th=[ 5342], 5.00th=[17695], 10.00th=[17957], 20.00th=[18744], 00:41:35.574 | 30.00th=[20055], 40.00th=[20841], 50.00th=[22414], 60.00th=[23200], 00:41:35.574 | 70.00th=[26084], 80.00th=[27395], 90.00th=[31065], 95.00th=[36439], 00:41:35.574 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:41:35.574 | 99.99th=[42206] 00:41:35.574 bw ( KiB/s): min= 9984, max=10517, per=15.63%, avg=10250.50, stdev=376.89, samples=2 00:41:35.574 iops : min= 
2496, max= 2629, avg=2562.50, stdev=94.05, samples=2 00:41:35.574 lat (msec) : 2=0.02%, 10=1.14%, 20=14.43%, 50=84.41% 00:41:35.574 cpu : usr=2.49%, sys=8.97%, ctx=273, majf=0, minf=19 00:41:35.574 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:41:35.574 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:35.574 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:41:35.574 issued rwts: total=2560,2637,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:35.574 latency : target=0, window=0, percentile=100.00%, depth=128 00:41:35.574 job3: (groupid=0, jobs=1): err= 0: pid=86720: Mon Jul 22 13:03:54 2024 00:41:35.574 read: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec) 00:41:35.574 slat (usec): min=8, max=3133, avg=99.17, stdev=428.55 00:41:35.574 clat (usec): min=9768, max=15853, avg=13040.35, stdev=982.59 00:41:35.574 lat (usec): min=9945, max=17903, avg=13139.53, stdev=913.83 00:41:35.574 clat percentiles (usec): 00:41:35.574 | 1.00th=[10421], 5.00th=[11076], 10.00th=[11338], 20.00th=[12518], 00:41:35.574 | 30.00th=[12780], 40.00th=[13042], 50.00th=[13304], 60.00th=[13435], 00:41:35.574 | 70.00th=[13566], 80.00th=[13698], 90.00th=[13960], 95.00th=[14222], 00:41:35.574 | 99.00th=[15139], 99.50th=[15139], 99.90th=[15401], 99.95th=[15401], 00:41:35.574 | 99.99th=[15795] 00:41:35.574 write: IOPS=5075, BW=19.8MiB/s (20.8MB/s)(19.9MiB/1004msec); 0 zone resets 00:41:35.574 slat (usec): min=11, max=3296, avg=99.11, stdev=399.07 00:41:35.574 clat (usec): min=1410, max=15635, avg=13066.46, stdev=1434.54 00:41:35.574 lat (usec): min=3909, max=15690, avg=13165.57, stdev=1409.61 00:41:35.574 clat percentiles (usec): 00:41:35.574 | 1.00th=[ 8455], 5.00th=[10945], 10.00th=[11338], 20.00th=[11863], 00:41:35.574 | 30.00th=[12518], 40.00th=[13173], 50.00th=[13304], 60.00th=[13566], 00:41:35.574 | 70.00th=[13829], 80.00th=[14222], 90.00th=[14615], 95.00th=[14877], 00:41:35.574 | 99.00th=[15401], 99.50th=[15401], 99.90th=[15664], 99.95th=[15664], 00:41:35.574 | 99.99th=[15664] 00:41:35.574 bw ( KiB/s): min=19264, max=20480, per=30.30%, avg=19872.00, stdev=859.84, samples=2 00:41:35.574 iops : min= 4816, max= 5120, avg=4968.00, stdev=214.96, samples=2 00:41:35.574 lat (msec) : 2=0.01%, 4=0.03%, 10=0.93%, 20=99.03% 00:41:35.574 cpu : usr=4.49%, sys=14.16%, ctx=741, majf=0, minf=9 00:41:35.574 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:41:35.574 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:35.574 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:41:35.574 issued rwts: total=4608,5096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:35.574 latency : target=0, window=0, percentile=100.00%, depth=128 00:41:35.575 00:41:35.575 Run status group 0 (all jobs): 00:41:35.575 READ: bw=59.7MiB/s (62.6MB/s), 9.95MiB/s-22.0MiB/s (10.4MB/s-23.0MB/s), io=60.0MiB (62.9MB), run=1001-1005msec 00:41:35.575 WRITE: bw=64.0MiB/s (67.2MB/s), 10.3MiB/s-22.3MiB/s (10.8MB/s-23.3MB/s), io=64.4MiB (67.5MB), run=1001-1005msec 00:41:35.575 00:41:35.575 Disk stats (read/write): 00:41:35.575 nvme0n1: ios=4799/5120, merge=0/0, ticks=16045/16493, in_queue=32538, util=89.58% 00:41:35.575 nvme0n2: ios=2376/2560, merge=0/0, ticks=16763/17831, in_queue=34594, util=89.20% 00:41:35.575 nvme0n3: ios=2048/2444, merge=0/0, ticks=12536/13432, in_queue=25968, util=89.26% 00:41:35.575 nvme0n4: ios=4096/4359, merge=0/0, ticks=12470/12170, in_queue=24640, util=89.83% 00:41:35.575 13:03:54 -- 
target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:41:35.575 [global] 00:41:35.575 thread=1 00:41:35.575 invalidate=1 00:41:35.575 rw=randwrite 00:41:35.575 time_based=1 00:41:35.575 runtime=1 00:41:35.575 ioengine=libaio 00:41:35.575 direct=1 00:41:35.575 bs=4096 00:41:35.575 iodepth=128 00:41:35.575 norandommap=0 00:41:35.575 numjobs=1 00:41:35.575 00:41:35.575 verify_dump=1 00:41:35.575 verify_backlog=512 00:41:35.575 verify_state_save=0 00:41:35.575 do_verify=1 00:41:35.575 verify=crc32c-intel 00:41:35.575 [job0] 00:41:35.575 filename=/dev/nvme0n1 00:41:35.575 [job1] 00:41:35.575 filename=/dev/nvme0n2 00:41:35.575 [job2] 00:41:35.575 filename=/dev/nvme0n3 00:41:35.575 [job3] 00:41:35.575 filename=/dev/nvme0n4 00:41:35.575 Could not set queue depth (nvme0n1) 00:41:35.575 Could not set queue depth (nvme0n2) 00:41:35.575 Could not set queue depth (nvme0n3) 00:41:35.575 Could not set queue depth (nvme0n4) 00:41:35.575 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:41:35.575 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:41:35.575 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:41:35.575 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:41:35.575 fio-3.35 00:41:35.575 Starting 4 threads 00:41:36.952 00:41:36.952 job0: (groupid=0, jobs=1): err= 0: pid=86773: Mon Jul 22 13:03:55 2024 00:41:36.952 read: IOPS=4063, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1008msec) 00:41:36.952 slat (usec): min=4, max=16762, avg=112.73, stdev=770.85 00:41:36.952 clat (usec): min=4717, max=76388, avg=13845.53, stdev=6945.61 00:41:36.952 lat (usec): min=4726, max=76402, avg=13958.27, stdev=7037.46 00:41:36.952 clat percentiles (usec): 00:41:36.952 | 1.00th=[ 8291], 5.00th=[ 8717], 10.00th=[ 9503], 20.00th=[10159], 00:41:36.952 | 30.00th=[10945], 40.00th=[11731], 50.00th=[12125], 60.00th=[12780], 00:41:36.952 | 70.00th=[13435], 80.00th=[15139], 90.00th=[22152], 95.00th=[22938], 00:41:36.952 | 99.00th=[49546], 99.50th=[64226], 99.90th=[76022], 99.95th=[76022], 00:41:36.952 | 99.99th=[76022] 00:41:36.952 write: IOPS=4422, BW=17.3MiB/s (18.1MB/s)(17.4MiB/1008msec); 0 zone resets 00:41:36.952 slat (usec): min=5, max=29679, avg=114.49, stdev=866.94 00:41:36.952 clat (usec): min=1943, max=76355, avg=15938.89, stdev=9596.60 00:41:36.952 lat (usec): min=4008, max=76366, avg=16053.39, stdev=9660.63 00:41:36.952 clat percentiles (usec): 00:41:36.952 | 1.00th=[ 5080], 5.00th=[ 8160], 10.00th=[ 9241], 20.00th=[10814], 00:41:36.952 | 30.00th=[11207], 40.00th=[11600], 50.00th=[11863], 60.00th=[12256], 00:41:36.952 | 70.00th=[13960], 80.00th=[22938], 90.00th=[25560], 95.00th=[32900], 00:41:36.952 | 99.00th=[64750], 99.50th=[66323], 99.90th=[66323], 99.95th=[66323], 00:41:36.952 | 99.99th=[76022] 00:41:36.952 bw ( KiB/s): min=13507, max=21160, per=25.67%, avg=17333.50, stdev=5411.49, samples=2 00:41:36.952 iops : min= 3376, max= 5290, avg=4333.00, stdev=1353.40, samples=2 00:41:36.952 lat (msec) : 2=0.01%, 4=0.01%, 10=16.48%, 20=63.02%, 50=19.16% 00:41:36.952 lat (msec) : 100=1.31% 00:41:36.952 cpu : usr=4.07%, sys=10.63%, ctx=461, majf=0, minf=5 00:41:36.952 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:41:36.952 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:41:36.952 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:41:36.952 issued rwts: total=4096,4458,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:36.952 latency : target=0, window=0, percentile=100.00%, depth=128 00:41:36.952 job1: (groupid=0, jobs=1): err= 0: pid=86774: Mon Jul 22 13:03:55 2024 00:41:36.952 read: IOPS=5565, BW=21.7MiB/s (22.8MB/s)(22.0MiB/1012msec) 00:41:36.952 slat (usec): min=4, max=9927, avg=86.87, stdev=574.59 00:41:36.952 clat (usec): min=4803, max=21795, avg=11523.86, stdev=2587.20 00:41:36.952 lat (usec): min=4815, max=21811, avg=11610.73, stdev=2621.14 00:41:36.952 clat percentiles (usec): 00:41:36.952 | 1.00th=[ 7046], 5.00th=[ 8848], 10.00th=[ 9110], 20.00th=[ 9634], 00:41:36.952 | 30.00th=[10159], 40.00th=[10552], 50.00th=[10814], 60.00th=[11207], 00:41:36.952 | 70.00th=[12387], 80.00th=[12911], 90.00th=[14615], 95.00th=[17171], 00:41:36.952 | 99.00th=[20317], 99.50th=[20841], 99.90th=[21365], 99.95th=[21890], 00:41:36.952 | 99.99th=[21890] 00:41:36.952 write: IOPS=5824, BW=22.8MiB/s (23.9MB/s)(23.0MiB/1012msec); 0 zone resets 00:41:36.952 slat (usec): min=5, max=9219, avg=79.23, stdev=507.48 00:41:36.952 clat (usec): min=3660, max=23360, avg=10740.83, stdev=2319.28 00:41:36.952 lat (usec): min=3689, max=23375, avg=10820.06, stdev=2377.15 00:41:36.952 clat percentiles (usec): 00:41:36.952 | 1.00th=[ 4621], 5.00th=[ 5604], 10.00th=[ 8029], 20.00th=[ 9634], 00:41:36.952 | 30.00th=[10421], 40.00th=[10683], 50.00th=[11076], 60.00th=[11600], 00:41:36.952 | 70.00th=[11863], 80.00th=[12125], 90.00th=[12256], 95.00th=[12518], 00:41:36.952 | 99.00th=[19530], 99.50th=[21103], 99.90th=[23200], 99.95th=[23462], 00:41:36.952 | 99.99th=[23462] 00:41:36.952 bw ( KiB/s): min=21560, max=24617, per=34.20%, avg=23088.50, stdev=2161.63, samples=2 00:41:36.952 iops : min= 5390, max= 6154, avg=5772.00, stdev=540.23, samples=2 00:41:36.952 lat (msec) : 4=0.04%, 10=24.98%, 20=73.86%, 50=1.12% 00:41:36.952 cpu : usr=4.95%, sys=13.75%, ctx=641, majf=0, minf=13 00:41:36.952 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:41:36.952 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:36.952 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:41:36.952 issued rwts: total=5632,5894,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:36.952 latency : target=0, window=0, percentile=100.00%, depth=128 00:41:36.952 job2: (groupid=0, jobs=1): err= 0: pid=86775: Mon Jul 22 13:03:55 2024 00:41:36.952 read: IOPS=2021, BW=8087KiB/s (8281kB/s)(8192KiB/1013msec) 00:41:36.952 slat (usec): min=3, max=19368, avg=185.06, stdev=1047.77 00:41:36.952 clat (usec): min=6006, max=41905, avg=23504.82, stdev=6559.13 00:41:36.952 lat (usec): min=6020, max=41939, avg=23689.88, stdev=6632.52 00:41:36.952 clat percentiles (usec): 00:41:36.952 | 1.00th=[10421], 5.00th=[11731], 10.00th=[13173], 20.00th=[16712], 00:41:36.952 | 30.00th=[22152], 40.00th=[22676], 50.00th=[23987], 60.00th=[25297], 00:41:36.952 | 70.00th=[27132], 80.00th=[29230], 90.00th=[31065], 95.00th=[33162], 00:41:36.952 | 99.00th=[36439], 99.50th=[37487], 99.90th=[38011], 99.95th=[39060], 00:41:36.952 | 99.99th=[41681] 00:41:36.952 write: IOPS=2468, BW=9876KiB/s (10.1MB/s)(9.77MiB/1013msec); 0 zone resets 00:41:36.952 slat (usec): min=4, max=21228, avg=236.91, stdev=1229.16 00:41:36.952 clat (msec): min=4, max=116, avg=32.08, stdev=23.61 00:41:36.952 lat (msec): min=4, max=116, avg=32.31, stdev=23.75 00:41:36.952 clat percentiles (msec): 00:41:36.952 | 1.00th=[ 8], 
5.00th=[ 12], 10.00th=[ 17], 20.00th=[ 21], 00:41:36.952 | 30.00th=[ 23], 40.00th=[ 24], 50.00th=[ 25], 60.00th=[ 26], 00:41:36.952 | 70.00th=[ 27], 80.00th=[ 32], 90.00th=[ 68], 95.00th=[ 101], 00:41:36.952 | 99.00th=[ 115], 99.50th=[ 115], 99.90th=[ 116], 99.95th=[ 116], 00:41:36.952 | 99.99th=[ 116] 00:41:36.952 bw ( KiB/s): min= 9218, max= 9784, per=14.07%, avg=9501.00, stdev=400.22, samples=2 00:41:36.952 iops : min= 2304, max= 2446, avg=2375.00, stdev=100.41, samples=2 00:41:36.952 lat (msec) : 10=1.17%, 20=20.03%, 50=71.29%, 100=4.70%, 250=2.81% 00:41:36.952 cpu : usr=2.17%, sys=6.03%, ctx=498, majf=0, minf=15 00:41:36.952 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:41:36.952 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:36.952 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:41:36.952 issued rwts: total=2048,2501,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:36.952 latency : target=0, window=0, percentile=100.00%, depth=128 00:41:36.952 job3: (groupid=0, jobs=1): err= 0: pid=86776: Mon Jul 22 13:03:55 2024 00:41:36.952 read: IOPS=4039, BW=15.8MiB/s (16.5MB/s)(16.0MiB/1014msec) 00:41:36.952 slat (usec): min=4, max=11676, avg=125.86, stdev=770.18 00:41:36.952 clat (usec): min=5074, max=35666, avg=15931.21, stdev=5806.39 00:41:36.952 lat (usec): min=5087, max=36407, avg=16057.07, stdev=5872.93 00:41:36.952 clat percentiles (usec): 00:41:36.952 | 1.00th=[ 8356], 5.00th=[ 9896], 10.00th=[10421], 20.00th=[11207], 00:41:36.952 | 30.00th=[12125], 40.00th=[12780], 50.00th=[13304], 60.00th=[14877], 00:41:36.952 | 70.00th=[17695], 80.00th=[22152], 90.00th=[24511], 95.00th=[27919], 00:41:36.952 | 99.00th=[30540], 99.50th=[30802], 99.90th=[35390], 99.95th=[35914], 00:41:36.952 | 99.99th=[35914] 00:41:36.953 write: IOPS=4203, BW=16.4MiB/s (17.2MB/s)(16.6MiB/1014msec); 0 zone resets 00:41:36.953 slat (usec): min=5, max=10164, avg=105.85, stdev=601.21 00:41:36.953 clat (usec): min=3371, max=36646, avg=14823.28, stdev=6046.02 00:41:36.953 lat (usec): min=3400, max=40141, avg=14929.13, stdev=6107.78 00:41:36.953 clat percentiles (usec): 00:41:36.953 | 1.00th=[ 4817], 5.00th=[ 7242], 10.00th=[ 9896], 20.00th=[11731], 00:41:36.953 | 30.00th=[11994], 40.00th=[12256], 50.00th=[12649], 60.00th=[13304], 00:41:36.953 | 70.00th=[13829], 80.00th=[20317], 90.00th=[25560], 95.00th=[27395], 00:41:36.953 | 99.00th=[31851], 99.50th=[32113], 99.90th=[35390], 99.95th=[35390], 00:41:36.953 | 99.99th=[36439] 00:41:36.953 bw ( KiB/s): min=12592, max=20480, per=24.49%, avg=16536.00, stdev=5577.66, samples=2 00:41:36.953 iops : min= 3148, max= 5120, avg=4134.00, stdev=1394.41, samples=2 00:41:36.953 lat (msec) : 4=0.16%, 10=7.79%, 20=68.80%, 50=23.26% 00:41:36.953 cpu : usr=3.65%, sys=11.25%, ctx=677, majf=0, minf=17 00:41:36.953 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:41:36.953 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:36.953 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:41:36.953 issued rwts: total=4096,4262,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:36.953 latency : target=0, window=0, percentile=100.00%, depth=128 00:41:36.953 00:41:36.953 Run status group 0 (all jobs): 00:41:36.953 READ: bw=61.1MiB/s (64.1MB/s), 8087KiB/s-21.7MiB/s (8281kB/s-22.8MB/s), io=62.0MiB (65.0MB), run=1008-1014msec 00:41:36.953 WRITE: bw=65.9MiB/s (69.1MB/s), 9876KiB/s-22.8MiB/s (10.1MB/s-23.9MB/s), io=66.9MiB (70.1MB), run=1008-1014msec 00:41:36.953 
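Each fio report ends the same way: the Run status group lines aggregate the four jobs, where the min-max range is the slowest and fastest individual job, the parenthesised figures are the same numbers in SI units, io= is the total data moved, and the aggregate bandwidth is simply total I/O divided by elapsed time. Checking that against the figures above for this run (approximate, since the jobs finished between 1008 and 1014 ms):

    READ:  62.0 MiB / 1.014 s ≈ 61.1 MiB/s   (reported 61.1 MiB/s)
    WRITE: 66.9 MiB / 1.014 s ≈ 66.0 MiB/s   (reported 65.9 MiB/s)

The per-device Disk stats block that follows shows the same activity from the kernel block layer's point of view (ios, merges, ticks and queue time per NVMe namespace).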
00:41:36.953 Disk stats (read/write): 00:41:36.953 nvme0n1: ios=3238/3584, merge=0/0, ticks=43987/58949, in_queue=102936, util=87.24% 00:41:36.953 nvme0n2: ios=4608/5119, merge=0/0, ticks=49541/51441, in_queue=100982, util=87.54% 00:41:36.953 nvme0n3: ios=1668/2048, merge=0/0, ticks=29537/61950, in_queue=91487, util=88.88% 00:41:36.953 nvme0n4: ios=3584/3999, merge=0/0, ticks=43212/45480, in_queue=88692, util=89.75% 00:41:36.953 13:03:55 -- target/fio.sh@55 -- # sync 00:41:36.953 13:03:56 -- target/fio.sh@59 -- # fio_pid=86795 00:41:36.953 13:03:56 -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:41:36.953 13:03:56 -- target/fio.sh@61 -- # sleep 3 00:41:36.953 [global] 00:41:36.953 thread=1 00:41:36.953 invalidate=1 00:41:36.953 rw=read 00:41:36.953 time_based=1 00:41:36.953 runtime=10 00:41:36.953 ioengine=libaio 00:41:36.953 direct=1 00:41:36.953 bs=4096 00:41:36.953 iodepth=1 00:41:36.953 norandommap=1 00:41:36.953 numjobs=1 00:41:36.953 00:41:36.953 [job0] 00:41:36.953 filename=/dev/nvme0n1 00:41:36.953 [job1] 00:41:36.953 filename=/dev/nvme0n2 00:41:36.953 [job2] 00:41:36.953 filename=/dev/nvme0n3 00:41:36.953 [job3] 00:41:36.953 filename=/dev/nvme0n4 00:41:36.953 Could not set queue depth (nvme0n1) 00:41:36.953 Could not set queue depth (nvme0n2) 00:41:36.953 Could not set queue depth (nvme0n3) 00:41:36.953 Could not set queue depth (nvme0n4) 00:41:36.953 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:36.953 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:36.953 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:36.953 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:36.953 fio-3.35 00:41:36.953 Starting 4 threads 00:41:40.236 13:03:59 -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:41:40.236 fio: pid=86838, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:41:40.236 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=64036864, buflen=4096 00:41:40.236 13:03:59 -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:41:40.236 fio: pid=86837, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:41:40.236 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=43974656, buflen=4096 00:41:40.236 13:03:59 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:41:40.236 13:03:59 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:41:40.494 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=48738304, buflen=4096 00:41:40.494 fio: pid=86835, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:41:40.494 13:03:59 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:41:40.494 13:03:59 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:41:40.753 fio: pid=86836, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:41:40.753 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=11501568, buflen=4096 00:41:40.753 00:41:40.753 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, 
error=Remote I/O error): pid=86835: Mon Jul 22 13:04:00 2024 00:41:40.753 read: IOPS=3503, BW=13.7MiB/s (14.3MB/s)(46.5MiB/3397msec) 00:41:40.753 slat (usec): min=8, max=12027, avg=17.97, stdev=183.64 00:41:40.753 clat (usec): min=3, max=172998, avg=265.92, stdev=1584.75 00:41:40.753 lat (usec): min=131, max=173010, avg=283.89, stdev=1596.00 00:41:40.753 clat percentiles (usec): 00:41:40.753 | 1.00th=[ 139], 5.00th=[ 153], 10.00th=[ 161], 20.00th=[ 219], 00:41:40.753 | 30.00th=[ 247], 40.00th=[ 258], 50.00th=[ 265], 60.00th=[ 273], 00:41:40.753 | 70.00th=[ 277], 80.00th=[ 285], 90.00th=[ 297], 95.00th=[ 306], 00:41:40.753 | 99.00th=[ 322], 99.50th=[ 363], 99.90th=[ 603], 99.95th=[ 1074], 00:41:40.753 | 99.99th=[ 2442] 00:41:40.753 bw ( KiB/s): min= 8872, max=19424, per=22.07%, avg=13888.00, stdev=3343.13, samples=6 00:41:40.753 iops : min= 2218, max= 4856, avg=3472.00, stdev=835.78, samples=6 00:41:40.753 lat (usec) : 4=0.01%, 10=0.01%, 250=32.52%, 500=67.34%, 750=0.05% 00:41:40.753 lat (usec) : 1000=0.01% 00:41:40.753 lat (msec) : 2=0.03%, 4=0.02%, 250=0.01% 00:41:40.753 cpu : usr=1.03%, sys=4.36%, ctx=11928, majf=0, minf=1 00:41:40.753 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:40.753 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:40.753 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:40.753 issued rwts: total=11900,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:40.753 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:40.753 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=86836: Mon Jul 22 13:04:00 2024 00:41:40.753 read: IOPS=5254, BW=20.5MiB/s (21.5MB/s)(75.0MiB/3653msec) 00:41:40.753 slat (usec): min=11, max=18773, avg=19.10, stdev=226.79 00:41:40.753 clat (usec): min=114, max=7510, avg=169.81, stdev=91.01 00:41:40.753 lat (usec): min=126, max=19043, avg=188.90, stdev=245.36 00:41:40.753 clat percentiles (usec): 00:41:40.753 | 1.00th=[ 124], 5.00th=[ 133], 10.00th=[ 137], 20.00th=[ 143], 00:41:40.753 | 30.00th=[ 147], 40.00th=[ 151], 50.00th=[ 157], 60.00th=[ 161], 00:41:40.753 | 70.00th=[ 169], 80.00th=[ 178], 90.00th=[ 241], 95.00th=[ 265], 00:41:40.753 | 99.00th=[ 293], 99.50th=[ 306], 99.90th=[ 338], 99.95th=[ 1188], 00:41:40.753 | 99.99th=[ 6325] 00:41:40.753 bw ( KiB/s): min=15992, max=23472, per=33.56%, avg=21118.14, stdev=3079.35, samples=7 00:41:40.753 iops : min= 3998, max= 5868, avg=5279.43, stdev=769.98, samples=7 00:41:40.753 lat (usec) : 250=92.07%, 500=7.85%, 750=0.01%, 1000=0.01% 00:41:40.753 lat (msec) : 2=0.02%, 4=0.03%, 10=0.01% 00:41:40.753 cpu : usr=1.42%, sys=6.68%, ctx=19203, majf=0, minf=1 00:41:40.753 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:40.753 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:40.753 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:40.753 issued rwts: total=19193,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:40.753 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:40.753 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=86837: Mon Jul 22 13:04:00 2024 00:41:40.753 read: IOPS=3388, BW=13.2MiB/s (13.9MB/s)(41.9MiB/3169msec) 00:41:40.753 slat (usec): min=10, max=7755, avg=15.46, stdev=102.56 00:41:40.753 clat (usec): min=135, max=172725, avg=278.14, stdev=1666.78 00:41:40.753 lat (usec): min=148, max=172737, avg=293.60, 
stdev=1669.82 00:41:40.753 clat percentiles (usec): 00:41:40.753 | 1.00th=[ 147], 5.00th=[ 157], 10.00th=[ 167], 20.00th=[ 249], 00:41:40.754 | 30.00th=[ 260], 40.00th=[ 265], 50.00th=[ 273], 60.00th=[ 277], 00:41:40.754 | 70.00th=[ 285], 80.00th=[ 289], 90.00th=[ 297], 95.00th=[ 306], 00:41:40.754 | 99.00th=[ 326], 99.50th=[ 363], 99.90th=[ 775], 99.95th=[ 1909], 00:41:40.754 | 99.99th=[ 6456] 00:41:40.754 bw ( KiB/s): min=10016, max=13808, per=20.82%, avg=13098.67, stdev=1513.13, samples=6 00:41:40.754 iops : min= 2504, max= 3452, avg=3274.67, stdev=378.28, samples=6 00:41:40.754 lat (usec) : 250=20.14%, 500=79.72%, 750=0.02%, 1000=0.05% 00:41:40.754 lat (msec) : 2=0.02%, 4=0.03%, 10=0.01%, 250=0.01% 00:41:40.754 cpu : usr=0.95%, sys=4.23%, ctx=10757, majf=0, minf=1 00:41:40.754 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:40.754 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:40.754 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:40.754 issued rwts: total=10737,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:40.754 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:40.754 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=86838: Mon Jul 22 13:04:00 2024 00:41:40.754 read: IOPS=5404, BW=21.1MiB/s (22.1MB/s)(61.1MiB/2893msec) 00:41:40.754 slat (nsec): min=11159, max=71165, avg=15770.81, stdev=4220.67 00:41:40.754 clat (usec): min=134, max=883, avg=167.80, stdev=18.61 00:41:40.754 lat (usec): min=148, max=907, avg=183.58, stdev=19.10 00:41:40.754 clat percentiles (usec): 00:41:40.754 | 1.00th=[ 141], 5.00th=[ 147], 10.00th=[ 151], 20.00th=[ 155], 00:41:40.754 | 30.00th=[ 159], 40.00th=[ 163], 50.00th=[ 167], 60.00th=[ 172], 00:41:40.754 | 70.00th=[ 176], 80.00th=[ 180], 90.00th=[ 188], 95.00th=[ 194], 00:41:40.754 | 99.00th=[ 206], 99.50th=[ 215], 99.90th=[ 285], 99.95th=[ 408], 00:41:40.754 | 99.99th=[ 857] 00:41:40.754 bw ( KiB/s): min=21552, max=21712, per=34.38%, avg=21632.00, stdev=69.74, samples=5 00:41:40.754 iops : min= 5388, max= 5428, avg=5408.00, stdev=17.44, samples=5 00:41:40.754 lat (usec) : 250=99.85%, 500=0.11%, 750=0.02%, 1000=0.01% 00:41:40.754 cpu : usr=1.52%, sys=6.85%, ctx=15637, majf=0, minf=1 00:41:40.754 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:40.754 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:40.754 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:40.754 issued rwts: total=15635,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:40.754 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:40.754 00:41:40.754 Run status group 0 (all jobs): 00:41:40.754 READ: bw=61.4MiB/s (64.4MB/s), 13.2MiB/s-21.1MiB/s (13.9MB/s-22.1MB/s), io=224MiB (235MB), run=2893-3653msec 00:41:40.754 00:41:40.754 Disk stats (read/write): 00:41:40.754 nvme0n1: ios=11789/0, merge=0/0, ticks=3219/0, in_queue=3219, util=95.31% 00:41:40.754 nvme0n2: ios=18974/0, merge=0/0, ticks=3396/0, in_queue=3396, util=94.73% 00:41:40.754 nvme0n3: ios=10458/0, merge=0/0, ticks=3013/0, in_queue=3013, util=96.43% 00:41:40.754 nvme0n4: ios=15517/0, merge=0/0, ticks=2756/0, in_queue=2756, util=96.76% 00:41:40.754 13:04:00 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:41:40.754 13:04:00 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:41:41.012 13:04:00 -- 
target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:41:41.012 13:04:00 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:41:41.271 13:04:00 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:41:41.271 13:04:00 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:41:41.529 13:04:00 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:41:41.529 13:04:00 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:41:41.805 13:04:01 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:41:41.805 13:04:01 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:41:42.085 13:04:01 -- target/fio.sh@69 -- # fio_status=0 00:41:42.085 13:04:01 -- target/fio.sh@70 -- # wait 86795 00:41:42.085 13:04:01 -- target/fio.sh@70 -- # fio_status=4 00:41:42.085 13:04:01 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:41:42.344 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:41:42.344 13:04:01 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:41:42.344 13:04:01 -- common/autotest_common.sh@1198 -- # local i=0 00:41:42.344 13:04:01 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:41:42.344 13:04:01 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:41:42.344 13:04:01 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:41:42.344 13:04:01 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:41:42.344 nvmf hotplug test: fio failed as expected 00:41:42.344 13:04:01 -- common/autotest_common.sh@1210 -- # return 0 00:41:42.344 13:04:01 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:41:42.344 13:04:01 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:41:42.344 13:04:01 -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:41:42.603 13:04:01 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:41:42.603 13:04:01 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:41:42.603 13:04:01 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:41:42.603 13:04:01 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:41:42.603 13:04:01 -- target/fio.sh@91 -- # nvmftestfini 00:41:42.603 13:04:01 -- nvmf/common.sh@476 -- # nvmfcleanup 00:41:42.603 13:04:01 -- nvmf/common.sh@116 -- # sync 00:41:42.603 13:04:01 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:41:42.603 13:04:01 -- nvmf/common.sh@119 -- # set +e 00:41:42.603 13:04:01 -- nvmf/common.sh@120 -- # for i in {1..20} 00:41:42.603 13:04:01 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:41:42.603 rmmod nvme_tcp 00:41:42.603 rmmod nvme_fabrics 00:41:42.603 rmmod nvme_keyring 00:41:42.603 13:04:01 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:41:42.603 13:04:01 -- nvmf/common.sh@123 -- # set -e 00:41:42.603 13:04:01 -- nvmf/common.sh@124 -- # return 0 00:41:42.603 13:04:01 -- nvmf/common.sh@477 -- # '[' -n 86300 ']' 00:41:42.603 13:04:01 -- nvmf/common.sh@478 -- # killprocess 86300 00:41:42.603 13:04:01 -- common/autotest_common.sh@926 -- # '[' -z 86300 ']' 00:41:42.603 13:04:01 -- common/autotest_common.sh@930 -- # kill -0 86300 00:41:42.603 
13:04:01 -- common/autotest_common.sh@931 -- # uname 00:41:42.603 13:04:01 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:41:42.603 13:04:01 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 86300 00:41:42.603 13:04:01 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:41:42.603 13:04:01 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:41:42.603 killing process with pid 86300 00:41:42.603 13:04:01 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 86300' 00:41:42.603 13:04:01 -- common/autotest_common.sh@945 -- # kill 86300 00:41:42.603 13:04:01 -- common/autotest_common.sh@950 -- # wait 86300 00:41:42.862 13:04:02 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:41:42.862 13:04:02 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:41:42.862 13:04:02 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:41:42.862 13:04:02 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:41:42.862 13:04:02 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:41:42.862 13:04:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:42.862 13:04:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:41:42.862 13:04:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:42.862 13:04:02 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:41:42.862 00:41:42.862 real 0m19.838s 00:41:42.862 user 1m15.704s 00:41:42.862 sys 0m9.536s 00:41:42.862 13:04:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:41:42.862 13:04:02 -- common/autotest_common.sh@10 -- # set +x 00:41:42.862 ************************************ 00:41:42.862 END TEST nvmf_fio_target 00:41:42.862 ************************************ 00:41:42.862 13:04:02 -- nvmf/nvmf.sh@55 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:41:42.862 13:04:02 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:41:42.862 13:04:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:41:42.862 13:04:02 -- common/autotest_common.sh@10 -- # set +x 00:41:42.862 ************************************ 00:41:42.862 START TEST nvmf_bdevio 00:41:42.862 ************************************ 00:41:42.862 13:04:02 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:41:42.862 * Looking for test storage... 
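The nvmftestfini trace above is the standard host-side teardown for these TCP targets: sync, unload the NVMe/TCP host modules, kill the nvmf_tgt process whose pid was recorded at startup, and flush the initiator address. A minimal sketch of that sequence, with the pid and interface name taken from this run rather than from the scripts themselves:

    sync
    modprobe -v -r nvme-tcp              # the rmmod lines above show nvme_fabrics/nvme_keyring going with it
    kill "$nvmfpid" && wait "$nvmfpid"   # 86300 in the run above
    ip -4 addr flush nvmf_init_if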
00:41:42.862 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:41:42.862 13:04:02 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:41:42.862 13:04:02 -- nvmf/common.sh@7 -- # uname -s 00:41:42.862 13:04:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:42.862 13:04:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:42.862 13:04:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:42.862 13:04:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:42.862 13:04:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:42.862 13:04:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:42.862 13:04:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:42.862 13:04:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:42.862 13:04:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:42.862 13:04:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:42.862 13:04:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:864ae4a3-cd96-439b-8ef2-6dab4d992115 00:41:42.862 13:04:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=864ae4a3-cd96-439b-8ef2-6dab4d992115 00:41:42.862 13:04:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:42.862 13:04:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:42.862 13:04:02 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:41:42.862 13:04:02 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:41:42.862 13:04:02 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:42.862 13:04:02 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:42.862 13:04:02 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:42.862 13:04:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:42.862 13:04:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:42.862 13:04:02 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:42.862 13:04:02 -- 
paths/export.sh@5 -- # export PATH 00:41:42.862 13:04:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:42.862 13:04:02 -- nvmf/common.sh@46 -- # : 0 00:41:42.862 13:04:02 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:41:42.862 13:04:02 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:41:42.862 13:04:02 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:41:42.862 13:04:02 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:42.862 13:04:02 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:42.862 13:04:02 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:41:42.862 13:04:02 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:41:42.862 13:04:02 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:41:42.862 13:04:02 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:41:42.862 13:04:02 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:41:42.862 13:04:02 -- target/bdevio.sh@14 -- # nvmftestinit 00:41:42.862 13:04:02 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:41:42.862 13:04:02 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:42.862 13:04:02 -- nvmf/common.sh@436 -- # prepare_net_devs 00:41:42.862 13:04:02 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:41:42.862 13:04:02 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:41:42.862 13:04:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:42.862 13:04:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:41:42.862 13:04:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:43.121 13:04:02 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:41:43.121 13:04:02 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:41:43.121 13:04:02 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:41:43.121 13:04:02 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:41:43.121 13:04:02 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:41:43.121 13:04:02 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:41:43.121 13:04:02 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:43.121 13:04:02 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:43.121 13:04:02 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:41:43.121 13:04:02 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:41:43.121 13:04:02 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:41:43.121 13:04:02 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:41:43.121 13:04:02 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:41:43.121 13:04:02 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:43.121 13:04:02 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:41:43.121 13:04:02 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:41:43.121 13:04:02 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:41:43.121 13:04:02 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:41:43.121 13:04:02 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:41:43.121 
13:04:02 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:41:43.121 Cannot find device "nvmf_tgt_br" 00:41:43.121 13:04:02 -- nvmf/common.sh@154 -- # true 00:41:43.121 13:04:02 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:41:43.121 Cannot find device "nvmf_tgt_br2" 00:41:43.121 13:04:02 -- nvmf/common.sh@155 -- # true 00:41:43.121 13:04:02 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:41:43.121 13:04:02 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:41:43.121 Cannot find device "nvmf_tgt_br" 00:41:43.121 13:04:02 -- nvmf/common.sh@157 -- # true 00:41:43.121 13:04:02 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:41:43.121 Cannot find device "nvmf_tgt_br2" 00:41:43.121 13:04:02 -- nvmf/common.sh@158 -- # true 00:41:43.121 13:04:02 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:41:43.121 13:04:02 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:41:43.121 13:04:02 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:41:43.121 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:41:43.121 13:04:02 -- nvmf/common.sh@161 -- # true 00:41:43.121 13:04:02 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:41:43.121 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:41:43.121 13:04:02 -- nvmf/common.sh@162 -- # true 00:41:43.121 13:04:02 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:41:43.121 13:04:02 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:41:43.121 13:04:02 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:41:43.121 13:04:02 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:41:43.121 13:04:02 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:41:43.121 13:04:02 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:41:43.121 13:04:02 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:41:43.121 13:04:02 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:41:43.121 13:04:02 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:41:43.121 13:04:02 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:41:43.121 13:04:02 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:41:43.121 13:04:02 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:41:43.121 13:04:02 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:41:43.122 13:04:02 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:41:43.122 13:04:02 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:41:43.122 13:04:02 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:41:43.122 13:04:02 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:41:43.122 13:04:02 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:41:43.122 13:04:02 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:41:43.380 13:04:02 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:41:43.380 13:04:02 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:41:43.380 13:04:02 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:41:43.380 13:04:02 -- 
nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:41:43.380 13:04:02 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:41:43.380 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:43.380 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.089 ms 00:41:43.380 00:41:43.380 --- 10.0.0.2 ping statistics --- 00:41:43.380 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:43.380 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:41:43.380 13:04:02 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:41:43.380 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:41:43.380 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:41:43.380 00:41:43.380 --- 10.0.0.3 ping statistics --- 00:41:43.380 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:43.380 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:41:43.380 13:04:02 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:41:43.380 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:41:43.380 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:41:43.380 00:41:43.380 --- 10.0.0.1 ping statistics --- 00:41:43.380 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:43.380 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:41:43.380 13:04:02 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:43.380 13:04:02 -- nvmf/common.sh@421 -- # return 0 00:41:43.380 13:04:02 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:41:43.380 13:04:02 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:43.380 13:04:02 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:41:43.380 13:04:02 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:41:43.380 13:04:02 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:43.381 13:04:02 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:41:43.381 13:04:02 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:41:43.381 13:04:02 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:41:43.381 13:04:02 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:41:43.381 13:04:02 -- common/autotest_common.sh@712 -- # xtrace_disable 00:41:43.381 13:04:02 -- common/autotest_common.sh@10 -- # set +x 00:41:43.381 13:04:02 -- nvmf/common.sh@469 -- # nvmfpid=87162 00:41:43.381 13:04:02 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:41:43.381 13:04:02 -- nvmf/common.sh@470 -- # waitforlisten 87162 00:41:43.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:43.381 13:04:02 -- common/autotest_common.sh@819 -- # '[' -z 87162 ']' 00:41:43.381 13:04:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:43.381 13:04:02 -- common/autotest_common.sh@824 -- # local max_retries=100 00:41:43.381 13:04:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:43.381 13:04:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:41:43.381 13:04:02 -- common/autotest_common.sh@10 -- # set +x 00:41:43.381 [2024-07-22 13:04:02.678633] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
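The nvmf_veth_init block traced above is what gives these tests their loopback-free topology: the initiator interface (nvmf_init_if, 10.0.0.1) stays in the root namespace, both target interfaces (10.0.0.2 and 10.0.0.3) are moved into nvmf_tgt_ns_spdk, and all of the bridge ends are enslaved to nvmf_br. Reduced to its essentials, with the second target interface and the link up/down housekeeping omitted, the wiring is roughly:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2    # sanity check before starting the target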
00:41:43.381 [2024-07-22 13:04:02.678734] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:43.652 [2024-07-22 13:04:02.820416] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:41:43.652 [2024-07-22 13:04:02.897300] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:41:43.652 [2024-07-22 13:04:02.897704] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:43.652 [2024-07-22 13:04:02.897761] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:43.652 [2024-07-22 13:04:02.897888] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:43.652 [2024-07-22 13:04:02.898426] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:41:43.652 [2024-07-22 13:04:02.898555] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:41:43.652 [2024-07-22 13:04:02.898619] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:41:43.652 [2024-07-22 13:04:02.898623] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:41:44.589 13:04:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:41:44.589 13:04:03 -- common/autotest_common.sh@852 -- # return 0 00:41:44.589 13:04:03 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:41:44.589 13:04:03 -- common/autotest_common.sh@718 -- # xtrace_disable 00:41:44.589 13:04:03 -- common/autotest_common.sh@10 -- # set +x 00:41:44.589 13:04:03 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:44.589 13:04:03 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:41:44.589 13:04:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:41:44.589 13:04:03 -- common/autotest_common.sh@10 -- # set +x 00:41:44.589 [2024-07-22 13:04:03.709873] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:44.589 13:04:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:41:44.589 13:04:03 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:41:44.589 13:04:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:41:44.589 13:04:03 -- common/autotest_common.sh@10 -- # set +x 00:41:44.589 Malloc0 00:41:44.589 13:04:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:41:44.589 13:04:03 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:41:44.589 13:04:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:41:44.589 13:04:03 -- common/autotest_common.sh@10 -- # set +x 00:41:44.589 13:04:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:41:44.589 13:04:03 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:41:44.589 13:04:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:41:44.589 13:04:03 -- common/autotest_common.sh@10 -- # set +x 00:41:44.589 13:04:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:41:44.589 13:04:03 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:44.589 13:04:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:41:44.589 13:04:03 -- common/autotest_common.sh@10 -- # set +x 00:41:44.589 
[2024-07-22 13:04:03.779213] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:44.589 13:04:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:41:44.589 13:04:03 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:41:44.589 13:04:03 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:41:44.589 13:04:03 -- nvmf/common.sh@520 -- # config=() 00:41:44.589 13:04:03 -- nvmf/common.sh@520 -- # local subsystem config 00:41:44.589 13:04:03 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:41:44.589 13:04:03 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:41:44.589 { 00:41:44.589 "params": { 00:41:44.589 "name": "Nvme$subsystem", 00:41:44.589 "trtype": "$TEST_TRANSPORT", 00:41:44.589 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:44.589 "adrfam": "ipv4", 00:41:44.589 "trsvcid": "$NVMF_PORT", 00:41:44.589 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:44.589 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:44.589 "hdgst": ${hdgst:-false}, 00:41:44.589 "ddgst": ${ddgst:-false} 00:41:44.589 }, 00:41:44.589 "method": "bdev_nvme_attach_controller" 00:41:44.589 } 00:41:44.589 EOF 00:41:44.589 )") 00:41:44.589 13:04:03 -- nvmf/common.sh@542 -- # cat 00:41:44.589 13:04:03 -- nvmf/common.sh@544 -- # jq . 00:41:44.589 13:04:03 -- nvmf/common.sh@545 -- # IFS=, 00:41:44.589 13:04:03 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:41:44.589 "params": { 00:41:44.589 "name": "Nvme1", 00:41:44.589 "trtype": "tcp", 00:41:44.589 "traddr": "10.0.0.2", 00:41:44.589 "adrfam": "ipv4", 00:41:44.589 "trsvcid": "4420", 00:41:44.589 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:44.589 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:44.589 "hdgst": false, 00:41:44.589 "ddgst": false 00:41:44.589 }, 00:41:44.589 "method": "bdev_nvme_attach_controller" 00:41:44.589 }' 00:41:44.589 [2024-07-22 13:04:03.836031] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:41:44.589 [2024-07-22 13:04:03.836116] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87216 ] 00:41:44.589 [2024-07-22 13:04:03.978185] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:41:44.847 [2024-07-22 13:04:04.052956] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:41:44.847 [2024-07-22 13:04:04.053031] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:41:44.847 [2024-07-22 13:04:04.053030] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:41:44.847 [2024-07-22 13:04:04.225770] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
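The rpc_cmd calls in this prologue are the whole target-side setup for the bdevio run: one TCP transport, one 64 MiB malloc bdev, one subsystem with that bdev as namespace 1, and one listener on the veth address. Written as plain scripts/rpc.py invocations against the running nvmf_tgt (the test issues the same methods over /var/tmp/spdk.sock), that amounts to roughly:

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420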
00:41:44.847 [2024-07-22 13:04:04.226310] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:41:44.847 I/O targets: 00:41:44.847 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:41:44.847 00:41:44.847 00:41:44.847 CUnit - A unit testing framework for C - Version 2.1-3 00:41:44.847 http://cunit.sourceforge.net/ 00:41:44.847 00:41:44.847 00:41:44.848 Suite: bdevio tests on: Nvme1n1 00:41:45.106 Test: blockdev write read block ...passed 00:41:45.106 Test: blockdev write zeroes read block ...passed 00:41:45.106 Test: blockdev write zeroes read no split ...passed 00:41:45.106 Test: blockdev write zeroes read split ...passed 00:41:45.106 Test: blockdev write zeroes read split partial ...passed 00:41:45.106 Test: blockdev reset ...[2024-07-22 13:04:04.342080] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:45.106 [2024-07-22 13:04:04.342414] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1534210 (9): Bad file descriptor 00:41:45.106 [2024-07-22 13:04:04.355602] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:41:45.106 passed 00:41:45.106 Test: blockdev write read 8 blocks ...passed 00:41:45.106 Test: blockdev write read size > 128k ...passed 00:41:45.106 Test: blockdev write read invalid size ...passed 00:41:45.106 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:41:45.106 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:41:45.106 Test: blockdev write read max offset ...passed 00:41:45.106 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:41:45.106 Test: blockdev writev readv 8 blocks ...passed 00:41:45.106 Test: blockdev writev readv 30 x 1block ...passed 00:41:45.106 Test: blockdev writev readv block ...passed 00:41:45.365 Test: blockdev writev readv size > 128k ...passed 00:41:45.365 Test: blockdev writev readv size > 128k in two iovs ...passed 00:41:45.365 Test: blockdev comparev and writev ...[2024-07-22 13:04:04.531329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:41:45.365 [2024-07-22 13:04:04.531516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:45.365 [2024-07-22 13:04:04.531544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:41:45.365 [2024-07-22 13:04:04.531557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:41:45.365 [2024-07-22 13:04:04.531927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:41:45.365 [2024-07-22 13:04:04.531949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:41:45.365 [2024-07-22 13:04:04.531968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:41:45.365 [2024-07-22 13:04:04.531978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:41:45.365 [2024-07-22 13:04:04.532288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE 
sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:41:45.365 [2024-07-22 13:04:04.532310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:41:45.365 [2024-07-22 13:04:04.532328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:41:45.365 [2024-07-22 13:04:04.532338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:41:45.365 [2024-07-22 13:04:04.532780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:41:45.365 [2024-07-22 13:04:04.532807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:41:45.365 [2024-07-22 13:04:04.532826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:41:45.365 [2024-07-22 13:04:04.532836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:41:45.365 passed 00:41:45.365 Test: blockdev nvme passthru rw ...passed 00:41:45.365 Test: blockdev nvme passthru vendor specific ...[2024-07-22 13:04:04.616942] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:41:45.365 [2024-07-22 13:04:04.616977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:41:45.365 [2024-07-22 13:04:04.617131] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:41:45.365 [2024-07-22 13:04:04.617184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:41:45.365 [2024-07-22 13:04:04.617321] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:41:45.365 [2024-07-22 13:04:04.617343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:41:45.365 [2024-07-22 13:04:04.617465] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:41:45.365 [2024-07-22 13:04:04.617487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:41:45.365 passed 00:41:45.365 Test: blockdev nvme admin passthru ...passed 00:41:45.365 Test: blockdev copy ...passed 00:41:45.365 00:41:45.365 Run Summary: Type Total Ran Passed Failed Inactive 00:41:45.365 suites 1 1 n/a 0 0 00:41:45.365 tests 23 23 23 0 0 00:41:45.365 asserts 152 152 152 0 n/a 00:41:45.365 00:41:45.365 Elapsed time = 0.894 seconds 00:41:45.624 13:04:04 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:41:45.624 13:04:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:41:45.624 13:04:04 -- common/autotest_common.sh@10 -- # set +x 00:41:45.624 13:04:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:41:45.624 13:04:04 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:41:45.624 13:04:04 -- target/bdevio.sh@30 -- # nvmftestfini 00:41:45.624 13:04:04 -- nvmf/common.sh@476 
-- # nvmfcleanup 00:41:45.624 13:04:04 -- nvmf/common.sh@116 -- # sync 00:41:45.624 13:04:04 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:41:45.624 13:04:04 -- nvmf/common.sh@119 -- # set +e 00:41:45.624 13:04:04 -- nvmf/common.sh@120 -- # for i in {1..20} 00:41:45.624 13:04:04 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:41:45.624 rmmod nvme_tcp 00:41:45.624 rmmod nvme_fabrics 00:41:45.624 rmmod nvme_keyring 00:41:45.624 13:04:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:41:45.624 13:04:04 -- nvmf/common.sh@123 -- # set -e 00:41:45.624 13:04:04 -- nvmf/common.sh@124 -- # return 0 00:41:45.624 13:04:04 -- nvmf/common.sh@477 -- # '[' -n 87162 ']' 00:41:45.624 13:04:04 -- nvmf/common.sh@478 -- # killprocess 87162 00:41:45.624 13:04:04 -- common/autotest_common.sh@926 -- # '[' -z 87162 ']' 00:41:45.624 13:04:04 -- common/autotest_common.sh@930 -- # kill -0 87162 00:41:45.624 13:04:04 -- common/autotest_common.sh@931 -- # uname 00:41:45.624 13:04:04 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:41:45.624 13:04:04 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 87162 00:41:45.624 killing process with pid 87162 00:41:45.624 13:04:04 -- common/autotest_common.sh@932 -- # process_name=reactor_3 00:41:45.624 13:04:04 -- common/autotest_common.sh@936 -- # '[' reactor_3 = sudo ']' 00:41:45.624 13:04:04 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 87162' 00:41:45.624 13:04:04 -- common/autotest_common.sh@945 -- # kill 87162 00:41:45.624 13:04:04 -- common/autotest_common.sh@950 -- # wait 87162 00:41:45.883 13:04:05 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:41:45.883 13:04:05 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:41:45.883 13:04:05 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:41:45.883 13:04:05 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:41:45.883 13:04:05 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:41:45.883 13:04:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:45.883 13:04:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:41:45.883 13:04:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:45.883 13:04:05 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:41:45.883 ************************************ 00:41:45.883 END TEST nvmf_bdevio 00:41:45.883 ************************************ 00:41:45.883 00:41:45.883 real 0m3.072s 00:41:45.883 user 0m11.272s 00:41:45.883 sys 0m0.752s 00:41:45.883 13:04:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:41:45.883 13:04:05 -- common/autotest_common.sh@10 -- # set +x 00:41:45.883 13:04:05 -- nvmf/nvmf.sh@57 -- # '[' tcp = tcp ']' 00:41:45.883 13:04:05 -- nvmf/nvmf.sh@58 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:41:45.883 13:04:05 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:41:45.883 13:04:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:41:45.883 13:04:05 -- common/autotest_common.sh@10 -- # set +x 00:41:46.142 ************************************ 00:41:46.142 START TEST nvmf_bdevio_no_huge 00:41:46.142 ************************************ 00:41:46.142 13:04:05 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:41:46.142 * Looking for test storage... 
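The nvmf_bdevio_no_huge pass that starts here repeats the same bdevio flow; the only intended difference is the memory model. With --no-hugepages the helpers append --no-huge -s 1024 to both the target and the bdevio app, so DPDK runs from anonymous memory in VA IOVA mode instead of hugepage-backed PA mode, as the EAL parameter lines further down show. Side by side, the launch lines amount to:

    # previous test (hugepages)
    nvmf_tgt -i 0 -e 0xFFFF -m 0x78
    bdevio --json /dev/fd/62
    # this test (no hugepages, 1024 MiB of regular memory)
    nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78
    bdevio --json /dev/fd/62 --no-huge -s 1024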
00:41:46.142 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:41:46.142 13:04:05 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:41:46.143 13:04:05 -- nvmf/common.sh@7 -- # uname -s 00:41:46.143 13:04:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:46.143 13:04:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:46.143 13:04:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:46.143 13:04:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:46.143 13:04:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:46.143 13:04:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:46.143 13:04:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:46.143 13:04:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:46.143 13:04:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:46.143 13:04:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:46.143 13:04:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:864ae4a3-cd96-439b-8ef2-6dab4d992115 00:41:46.143 13:04:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=864ae4a3-cd96-439b-8ef2-6dab4d992115 00:41:46.143 13:04:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:46.143 13:04:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:46.143 13:04:05 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:41:46.143 13:04:05 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:41:46.143 13:04:05 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:46.143 13:04:05 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:46.143 13:04:05 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:46.143 13:04:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:46.143 13:04:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:46.143 13:04:05 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:46.143 13:04:05 -- 
paths/export.sh@5 -- # export PATH 00:41:46.143 13:04:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:46.143 13:04:05 -- nvmf/common.sh@46 -- # : 0 00:41:46.143 13:04:05 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:41:46.143 13:04:05 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:41:46.143 13:04:05 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:41:46.143 13:04:05 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:46.143 13:04:05 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:46.143 13:04:05 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:41:46.143 13:04:05 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:41:46.143 13:04:05 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:41:46.143 13:04:05 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:41:46.143 13:04:05 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:41:46.143 13:04:05 -- target/bdevio.sh@14 -- # nvmftestinit 00:41:46.143 13:04:05 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:41:46.143 13:04:05 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:46.143 13:04:05 -- nvmf/common.sh@436 -- # prepare_net_devs 00:41:46.143 13:04:05 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:41:46.143 13:04:05 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:41:46.143 13:04:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:46.143 13:04:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:41:46.143 13:04:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:46.143 13:04:05 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:41:46.143 13:04:05 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:41:46.143 13:04:05 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:41:46.143 13:04:05 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:41:46.143 13:04:05 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:41:46.143 13:04:05 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:41:46.143 13:04:05 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:46.143 13:04:05 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:46.143 13:04:05 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:41:46.143 13:04:05 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:41:46.143 13:04:05 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:41:46.143 13:04:05 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:41:46.143 13:04:05 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:41:46.143 13:04:05 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:46.143 13:04:05 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:41:46.143 13:04:05 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:41:46.143 13:04:05 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:41:46.143 13:04:05 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:41:46.143 13:04:05 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:41:46.143 
13:04:05 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:41:46.143 Cannot find device "nvmf_tgt_br" 00:41:46.143 13:04:05 -- nvmf/common.sh@154 -- # true 00:41:46.143 13:04:05 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:41:46.143 Cannot find device "nvmf_tgt_br2" 00:41:46.143 13:04:05 -- nvmf/common.sh@155 -- # true 00:41:46.143 13:04:05 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:41:46.143 13:04:05 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:41:46.143 Cannot find device "nvmf_tgt_br" 00:41:46.143 13:04:05 -- nvmf/common.sh@157 -- # true 00:41:46.143 13:04:05 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:41:46.143 Cannot find device "nvmf_tgt_br2" 00:41:46.143 13:04:05 -- nvmf/common.sh@158 -- # true 00:41:46.143 13:04:05 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:41:46.143 13:04:05 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:41:46.143 13:04:05 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:41:46.143 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:41:46.143 13:04:05 -- nvmf/common.sh@161 -- # true 00:41:46.143 13:04:05 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:41:46.143 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:41:46.143 13:04:05 -- nvmf/common.sh@162 -- # true 00:41:46.143 13:04:05 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:41:46.143 13:04:05 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:41:46.402 13:04:05 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:41:46.402 13:04:05 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:41:46.402 13:04:05 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:41:46.402 13:04:05 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:41:46.402 13:04:05 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:41:46.402 13:04:05 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:41:46.402 13:04:05 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:41:46.402 13:04:05 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:41:46.402 13:04:05 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:41:46.402 13:04:05 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:41:46.402 13:04:05 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:41:46.402 13:04:05 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:41:46.402 13:04:05 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:41:46.402 13:04:05 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:41:46.402 13:04:05 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:41:46.402 13:04:05 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:41:46.402 13:04:05 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:41:46.402 13:04:05 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:41:46.402 13:04:05 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:41:46.402 13:04:05 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:41:46.402 13:04:05 -- 
nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:41:46.402 13:04:05 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:41:46.402 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:46.402 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:41:46.402 00:41:46.402 --- 10.0.0.2 ping statistics --- 00:41:46.402 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:46.402 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:41:46.402 13:04:05 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:41:46.402 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:41:46.402 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:41:46.402 00:41:46.402 --- 10.0.0.3 ping statistics --- 00:41:46.402 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:46.402 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:41:46.402 13:04:05 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:41:46.402 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:41:46.402 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:41:46.402 00:41:46.402 --- 10.0.0.1 ping statistics --- 00:41:46.402 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:46.402 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:41:46.402 13:04:05 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:46.402 13:04:05 -- nvmf/common.sh@421 -- # return 0 00:41:46.402 13:04:05 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:41:46.402 13:04:05 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:46.402 13:04:05 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:41:46.402 13:04:05 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:41:46.402 13:04:05 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:46.402 13:04:05 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:41:46.402 13:04:05 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:41:46.402 13:04:05 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:41:46.402 13:04:05 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:41:46.402 13:04:05 -- common/autotest_common.sh@712 -- # xtrace_disable 00:41:46.402 13:04:05 -- common/autotest_common.sh@10 -- # set +x 00:41:46.402 13:04:05 -- nvmf/common.sh@469 -- # nvmfpid=87394 00:41:46.402 13:04:05 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:41:46.402 13:04:05 -- nvmf/common.sh@470 -- # waitforlisten 87394 00:41:46.402 13:04:05 -- common/autotest_common.sh@819 -- # '[' -z 87394 ']' 00:41:46.402 13:04:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:46.402 13:04:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:41:46.402 13:04:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:46.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:46.402 13:04:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:41:46.402 13:04:05 -- common/autotest_common.sh@10 -- # set +x 00:41:46.661 [2024-07-22 13:04:05.853184] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
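nvmfappstart, traced just above, launches the target in the background inside the namespace, records its pid in nvmfpid (87394 in this run), and blocks in waitforlisten until the RPC socket answers; the same pid is what killprocess reaps at the end of the test. Stripped of the timing bookkeeping, the pattern is roughly:

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &
    nvmfpid=$!
    waitforlisten "$nvmfpid"    # poll /var/tmp/spdk.sock until the app is ready
    trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT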
00:41:46.661 [2024-07-22 13:04:05.853299] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:41:46.661 [2024-07-22 13:04:05.998559] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:41:46.920 [2024-07-22 13:04:06.083985] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:41:46.920 [2024-07-22 13:04:06.084300] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:46.920 [2024-07-22 13:04:06.084419] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:46.920 [2024-07-22 13:04:06.084666] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:46.920 [2024-07-22 13:04:06.085023] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:41:46.920 [2024-07-22 13:04:06.085184] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:41:46.920 [2024-07-22 13:04:06.085381] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:41:46.920 [2024-07-22 13:04:06.085385] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:41:47.488 13:04:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:41:47.488 13:04:06 -- common/autotest_common.sh@852 -- # return 0 00:41:47.488 13:04:06 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:41:47.488 13:04:06 -- common/autotest_common.sh@718 -- # xtrace_disable 00:41:47.488 13:04:06 -- common/autotest_common.sh@10 -- # set +x 00:41:47.488 13:04:06 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:47.488 13:04:06 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:41:47.488 13:04:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:41:47.488 13:04:06 -- common/autotest_common.sh@10 -- # set +x 00:41:47.488 [2024-07-22 13:04:06.832717] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:47.488 13:04:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:41:47.488 13:04:06 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:41:47.488 13:04:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:41:47.488 13:04:06 -- common/autotest_common.sh@10 -- # set +x 00:41:47.488 Malloc0 00:41:47.488 13:04:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:41:47.488 13:04:06 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:41:47.488 13:04:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:41:47.488 13:04:06 -- common/autotest_common.sh@10 -- # set +x 00:41:47.488 13:04:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:41:47.488 13:04:06 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:41:47.488 13:04:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:41:47.488 13:04:06 -- common/autotest_common.sh@10 -- # set +x 00:41:47.488 13:04:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:41:47.488 13:04:06 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:47.488 13:04:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:41:47.488 13:04:06 -- common/autotest_common.sh@10 -- # set +x 00:41:47.488 
[2024-07-22 13:04:06.876956] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:47.488 13:04:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:41:47.488 13:04:06 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:41:47.488 13:04:06 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:41:47.488 13:04:06 -- nvmf/common.sh@520 -- # config=() 00:41:47.488 13:04:06 -- nvmf/common.sh@520 -- # local subsystem config 00:41:47.488 13:04:06 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:41:47.488 13:04:06 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:41:47.488 { 00:41:47.488 "params": { 00:41:47.488 "name": "Nvme$subsystem", 00:41:47.488 "trtype": "$TEST_TRANSPORT", 00:41:47.488 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:47.488 "adrfam": "ipv4", 00:41:47.488 "trsvcid": "$NVMF_PORT", 00:41:47.488 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:47.488 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:47.488 "hdgst": ${hdgst:-false}, 00:41:47.488 "ddgst": ${ddgst:-false} 00:41:47.488 }, 00:41:47.488 "method": "bdev_nvme_attach_controller" 00:41:47.488 } 00:41:47.488 EOF 00:41:47.488 )") 00:41:47.488 13:04:06 -- nvmf/common.sh@542 -- # cat 00:41:47.488 13:04:06 -- nvmf/common.sh@544 -- # jq . 00:41:47.488 13:04:06 -- nvmf/common.sh@545 -- # IFS=, 00:41:47.488 13:04:06 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:41:47.488 "params": { 00:41:47.488 "name": "Nvme1", 00:41:47.488 "trtype": "tcp", 00:41:47.488 "traddr": "10.0.0.2", 00:41:47.488 "adrfam": "ipv4", 00:41:47.488 "trsvcid": "4420", 00:41:47.488 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:47.488 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:47.488 "hdgst": false, 00:41:47.488 "ddgst": false 00:41:47.488 }, 00:41:47.488 "method": "bdev_nvme_attach_controller" 00:41:47.488 }' 00:41:47.748 [2024-07-22 13:04:06.932747] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:41:47.748 [2024-07-22 13:04:06.932843] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid87444 ] 00:41:47.748 [2024-07-22 13:04:07.073167] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:41:48.007 [2024-07-22 13:04:07.209161] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:41:48.007 [2024-07-22 13:04:07.209280] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:41:48.007 [2024-07-22 13:04:07.209286] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:41:48.007 [2024-07-22 13:04:07.419790] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
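
The rpc_cmd traces at bdevio.sh@18-@22 above build the target that bdevio exercises: a TCP transport, a 64 MiB malloc bdev with 512-byte blocks (the "Nvme1n1: 131072 blocks of 512 bytes" reported further down), subsystem cnode1 with that namespace, and a listener on 10.0.0.2:4420. rpc_cmd forwards these to scripts/rpc.py, so the same setup issued directly would look roughly like the condensed recap below (same calls and flags as traced, not a different procedure):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
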
00:41:48.007 [2024-07-22 13:04:07.420073] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:41:48.007 I/O targets: 00:41:48.007 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:41:48.007 00:41:48.007 00:41:48.007 CUnit - A unit testing framework for C - Version 2.1-3 00:41:48.007 http://cunit.sourceforge.net/ 00:41:48.007 00:41:48.007 00:41:48.007 Suite: bdevio tests on: Nvme1n1 00:41:48.266 Test: blockdev write read block ...passed 00:41:48.266 Test: blockdev write zeroes read block ...passed 00:41:48.266 Test: blockdev write zeroes read no split ...passed 00:41:48.266 Test: blockdev write zeroes read split ...passed 00:41:48.266 Test: blockdev write zeroes read split partial ...passed 00:41:48.266 Test: blockdev reset ...[2024-07-22 13:04:07.545406] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:48.266 [2024-07-22 13:04:07.545675] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb43e0 (9): Bad file descriptor 00:41:48.266 [2024-07-22 13:04:07.556856] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:41:48.266 passed 00:41:48.266 Test: blockdev write read 8 blocks ...passed 00:41:48.266 Test: blockdev write read size > 128k ...passed 00:41:48.266 Test: blockdev write read invalid size ...passed 00:41:48.266 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:41:48.266 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:41:48.266 Test: blockdev write read max offset ...passed 00:41:48.266 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:41:48.266 Test: blockdev writev readv 8 blocks ...passed 00:41:48.266 Test: blockdev writev readv 30 x 1block ...passed 00:41:48.525 Test: blockdev writev readv block ...passed 00:41:48.525 Test: blockdev writev readv size > 128k ...passed 00:41:48.525 Test: blockdev writev readv size > 128k in two iovs ...passed 00:41:48.525 Test: blockdev comparev and writev ...[2024-07-22 13:04:07.730057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:41:48.525 [2024-07-22 13:04:07.730098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:48.525 [2024-07-22 13:04:07.730118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:41:48.525 [2024-07-22 13:04:07.730129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:41:48.525 [2024-07-22 13:04:07.730589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:41:48.525 [2024-07-22 13:04:07.730617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:41:48.525 [2024-07-22 13:04:07.730636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:41:48.525 [2024-07-22 13:04:07.730645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:41:48.525 [2024-07-22 13:04:07.731223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE 
sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:41:48.525 [2024-07-22 13:04:07.731244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:41:48.525 [2024-07-22 13:04:07.731260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:41:48.525 [2024-07-22 13:04:07.731271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:41:48.525 [2024-07-22 13:04:07.731558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:41:48.525 [2024-07-22 13:04:07.731574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:41:48.525 [2024-07-22 13:04:07.731590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:41:48.525 [2024-07-22 13:04:07.731600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:41:48.525 passed 00:41:48.525 Test: blockdev nvme passthru rw ...passed 00:41:48.525 Test: blockdev nvme passthru vendor specific ...[2024-07-22 13:04:07.813490] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:41:48.525 [2024-07-22 13:04:07.813514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:41:48.525 [2024-07-22 13:04:07.813674] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:41:48.525 [2024-07-22 13:04:07.813689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:41:48.525 passed 00:41:48.525 Test: blockdev nvme admin passthru ...[2024-07-22 13:04:07.814050] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:41:48.525 [2024-07-22 13:04:07.814070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:41:48.525 [2024-07-22 13:04:07.814203] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:41:48.525 [2024-07-22 13:04:07.814219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:41:48.525 passed 00:41:48.525 Test: blockdev copy ...passed 00:41:48.525 00:41:48.525 Run Summary: Type Total Ran Passed Failed Inactive 00:41:48.525 suites 1 1 n/a 0 0 00:41:48.525 tests 23 23 23 0 0 00:41:48.525 asserts 152 152 152 0 n/a 00:41:48.525 00:41:48.525 Elapsed time = 0.899 seconds 00:41:48.784 13:04:08 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:41:48.784 13:04:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:41:48.784 13:04:08 -- common/autotest_common.sh@10 -- # set +x 00:41:48.784 13:04:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:41:48.784 13:04:08 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:41:48.784 13:04:08 -- target/bdevio.sh@30 -- # nvmftestfini 00:41:48.784 13:04:08 -- nvmf/common.sh@476 
-- # nvmfcleanup 00:41:48.784 13:04:08 -- nvmf/common.sh@116 -- # sync 00:41:49.043 13:04:08 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:41:49.043 13:04:08 -- nvmf/common.sh@119 -- # set +e 00:41:49.043 13:04:08 -- nvmf/common.sh@120 -- # for i in {1..20} 00:41:49.043 13:04:08 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:41:49.043 rmmod nvme_tcp 00:41:49.043 rmmod nvme_fabrics 00:41:49.043 rmmod nvme_keyring 00:41:49.043 13:04:08 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:41:49.043 13:04:08 -- nvmf/common.sh@123 -- # set -e 00:41:49.043 13:04:08 -- nvmf/common.sh@124 -- # return 0 00:41:49.043 13:04:08 -- nvmf/common.sh@477 -- # '[' -n 87394 ']' 00:41:49.043 13:04:08 -- nvmf/common.sh@478 -- # killprocess 87394 00:41:49.043 13:04:08 -- common/autotest_common.sh@926 -- # '[' -z 87394 ']' 00:41:49.043 13:04:08 -- common/autotest_common.sh@930 -- # kill -0 87394 00:41:49.043 13:04:08 -- common/autotest_common.sh@931 -- # uname 00:41:49.043 13:04:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:41:49.043 13:04:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 87394 00:41:49.043 13:04:08 -- common/autotest_common.sh@932 -- # process_name=reactor_3 00:41:49.043 killing process with pid 87394 00:41:49.043 13:04:08 -- common/autotest_common.sh@936 -- # '[' reactor_3 = sudo ']' 00:41:49.043 13:04:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 87394' 00:41:49.043 13:04:08 -- common/autotest_common.sh@945 -- # kill 87394 00:41:49.043 13:04:08 -- common/autotest_common.sh@950 -- # wait 87394 00:41:49.302 13:04:08 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:41:49.302 13:04:08 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:41:49.302 13:04:08 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:41:49.302 13:04:08 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:41:49.302 13:04:08 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:41:49.302 13:04:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:49.302 13:04:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:41:49.302 13:04:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:49.302 13:04:08 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:41:49.302 00:41:49.302 real 0m3.388s 00:41:49.302 user 0m12.204s 00:41:49.302 sys 0m1.265s 00:41:49.302 13:04:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:41:49.302 ************************************ 00:41:49.302 END TEST nvmf_bdevio_no_huge 00:41:49.302 ************************************ 00:41:49.302 13:04:08 -- common/autotest_common.sh@10 -- # set +x 00:41:49.561 13:04:08 -- nvmf/nvmf.sh@59 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:41:49.561 13:04:08 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:41:49.561 13:04:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:41:49.561 13:04:08 -- common/autotest_common.sh@10 -- # set +x 00:41:49.561 ************************************ 00:41:49.561 START TEST nvmf_tls 00:41:49.561 ************************************ 00:41:49.561 13:04:08 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:41:49.561 * Looking for test storage... 
00:41:49.561 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:41:49.561 13:04:08 -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:41:49.561 13:04:08 -- nvmf/common.sh@7 -- # uname -s 00:41:49.561 13:04:08 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:49.561 13:04:08 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:49.561 13:04:08 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:49.561 13:04:08 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:49.561 13:04:08 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:49.561 13:04:08 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:49.561 13:04:08 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:49.561 13:04:08 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:49.561 13:04:08 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:49.561 13:04:08 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:49.561 13:04:08 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:864ae4a3-cd96-439b-8ef2-6dab4d992115 00:41:49.561 13:04:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=864ae4a3-cd96-439b-8ef2-6dab4d992115 00:41:49.561 13:04:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:49.561 13:04:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:49.561 13:04:08 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:41:49.561 13:04:08 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:41:49.561 13:04:08 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:49.561 13:04:08 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:49.561 13:04:08 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:49.561 13:04:08 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:49.561 13:04:08 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:49.561 13:04:08 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:49.561 13:04:08 -- paths/export.sh@5 
-- # export PATH 00:41:49.562 13:04:08 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:49.562 13:04:08 -- nvmf/common.sh@46 -- # : 0 00:41:49.562 13:04:08 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:41:49.562 13:04:08 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:41:49.562 13:04:08 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:41:49.562 13:04:08 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:49.562 13:04:08 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:49.562 13:04:08 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:41:49.562 13:04:08 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:41:49.562 13:04:08 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:41:49.562 13:04:08 -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:41:49.562 13:04:08 -- target/tls.sh@71 -- # nvmftestinit 00:41:49.562 13:04:08 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:41:49.562 13:04:08 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:49.562 13:04:08 -- nvmf/common.sh@436 -- # prepare_net_devs 00:41:49.562 13:04:08 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:41:49.562 13:04:08 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:41:49.562 13:04:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:49.562 13:04:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:41:49.562 13:04:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:49.562 13:04:08 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:41:49.562 13:04:08 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:41:49.562 13:04:08 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:41:49.562 13:04:08 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:41:49.562 13:04:08 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:41:49.562 13:04:08 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:41:49.562 13:04:08 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:49.562 13:04:08 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:49.562 13:04:08 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:41:49.562 13:04:08 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:41:49.562 13:04:08 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:41:49.562 13:04:08 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:41:49.562 13:04:08 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:41:49.562 13:04:08 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:49.562 13:04:08 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:41:49.562 13:04:08 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:41:49.562 13:04:08 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:41:49.562 13:04:08 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:41:49.562 13:04:08 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:41:49.562 13:04:08 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br 
nomaster 00:41:49.562 Cannot find device "nvmf_tgt_br" 00:41:49.562 13:04:08 -- nvmf/common.sh@154 -- # true 00:41:49.562 13:04:08 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:41:49.562 Cannot find device "nvmf_tgt_br2" 00:41:49.562 13:04:08 -- nvmf/common.sh@155 -- # true 00:41:49.562 13:04:08 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:41:49.562 13:04:08 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:41:49.562 Cannot find device "nvmf_tgt_br" 00:41:49.562 13:04:08 -- nvmf/common.sh@157 -- # true 00:41:49.562 13:04:08 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:41:49.562 Cannot find device "nvmf_tgt_br2" 00:41:49.562 13:04:08 -- nvmf/common.sh@158 -- # true 00:41:49.562 13:04:08 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:41:49.562 13:04:08 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:41:49.562 13:04:08 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:41:49.562 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:41:49.562 13:04:08 -- nvmf/common.sh@161 -- # true 00:41:49.562 13:04:08 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:41:49.562 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:41:49.562 13:04:08 -- nvmf/common.sh@162 -- # true 00:41:49.562 13:04:08 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:41:49.826 13:04:08 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:41:49.826 13:04:08 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:41:49.826 13:04:09 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:41:49.826 13:04:09 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:41:49.826 13:04:09 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:41:49.826 13:04:09 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:41:49.826 13:04:09 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:41:49.826 13:04:09 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:41:49.826 13:04:09 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:41:49.826 13:04:09 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:41:49.826 13:04:09 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:41:49.826 13:04:09 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:41:49.826 13:04:09 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:41:49.826 13:04:09 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:41:49.826 13:04:09 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:41:49.826 13:04:09 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:41:49.826 13:04:09 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:41:49.826 13:04:09 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:41:49.826 13:04:09 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:41:49.826 13:04:09 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:41:49.826 13:04:09 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:41:49.826 13:04:09 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o 
nvmf_br -j ACCEPT 00:41:49.826 13:04:09 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:41:49.826 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:49.826 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:41:49.826 00:41:49.826 --- 10.0.0.2 ping statistics --- 00:41:49.826 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:49.826 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:41:49.826 13:04:09 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:41:49.826 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:41:49.826 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.033 ms 00:41:49.826 00:41:49.826 --- 10.0.0.3 ping statistics --- 00:41:49.826 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:49.826 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:41:49.826 13:04:09 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:41:49.826 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:41:49.826 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:41:49.826 00:41:49.826 --- 10.0.0.1 ping statistics --- 00:41:49.826 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:49.826 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:41:49.826 13:04:09 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:49.826 13:04:09 -- nvmf/common.sh@421 -- # return 0 00:41:49.826 13:04:09 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:41:49.826 13:04:09 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:49.826 13:04:09 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:41:49.826 13:04:09 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:41:49.826 13:04:09 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:49.827 13:04:09 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:41:49.827 13:04:09 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:41:49.827 13:04:09 -- target/tls.sh@72 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:41:49.827 13:04:09 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:41:49.827 13:04:09 -- common/autotest_common.sh@712 -- # xtrace_disable 00:41:49.827 13:04:09 -- common/autotest_common.sh@10 -- # set +x 00:41:49.827 13:04:09 -- nvmf/common.sh@469 -- # nvmfpid=87635 00:41:49.827 13:04:09 -- nvmf/common.sh@470 -- # waitforlisten 87635 00:41:49.827 13:04:09 -- common/autotest_common.sh@819 -- # '[' -z 87635 ']' 00:41:49.827 13:04:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:49.827 13:04:09 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:41:49.827 13:04:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:41:49.827 13:04:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:49.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:49.827 13:04:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:41:49.827 13:04:09 -- common/autotest_common.sh@10 -- # set +x 00:41:50.093 [2024-07-22 13:04:09.254614] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:41:50.093 [2024-07-22 13:04:09.254702] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:50.093 [2024-07-22 13:04:09.397056] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:50.093 [2024-07-22 13:04:09.469562] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:41:50.093 [2024-07-22 13:04:09.469747] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:50.093 [2024-07-22 13:04:09.469762] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:50.093 [2024-07-22 13:04:09.469774] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:50.093 [2024-07-22 13:04:09.469804] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:41:51.029 13:04:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:41:51.029 13:04:10 -- common/autotest_common.sh@852 -- # return 0 00:41:51.029 13:04:10 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:41:51.029 13:04:10 -- common/autotest_common.sh@718 -- # xtrace_disable 00:41:51.029 13:04:10 -- common/autotest_common.sh@10 -- # set +x 00:41:51.029 13:04:10 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:51.029 13:04:10 -- target/tls.sh@74 -- # '[' tcp '!=' tcp ']' 00:41:51.029 13:04:10 -- target/tls.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:41:51.029 true 00:41:51.029 13:04:10 -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:41:51.029 13:04:10 -- target/tls.sh@82 -- # jq -r .tls_version 00:41:51.287 13:04:10 -- target/tls.sh@82 -- # version=0 00:41:51.287 13:04:10 -- target/tls.sh@83 -- # [[ 0 != \0 ]] 00:41:51.287 13:04:10 -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:41:51.546 13:04:10 -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:41:51.546 13:04:10 -- target/tls.sh@90 -- # jq -r .tls_version 00:41:51.805 13:04:11 -- target/tls.sh@90 -- # version=13 00:41:51.805 13:04:11 -- target/tls.sh@91 -- # [[ 13 != \1\3 ]] 00:41:51.805 13:04:11 -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:41:52.064 13:04:11 -- target/tls.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:41:52.064 13:04:11 -- target/tls.sh@98 -- # jq -r .tls_version 00:41:52.322 13:04:11 -- target/tls.sh@98 -- # version=7 00:41:52.322 13:04:11 -- target/tls.sh@99 -- # [[ 7 != \7 ]] 00:41:52.322 13:04:11 -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:41:52.322 13:04:11 -- target/tls.sh@105 -- # jq -r .enable_ktls 00:41:52.581 13:04:11 -- target/tls.sh@105 -- # ktls=false 00:41:52.581 13:04:11 -- target/tls.sh@106 -- # [[ false != \f\a\l\s\e ]] 00:41:52.581 13:04:11 -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:41:52.841 13:04:12 -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:41:52.841 13:04:12 -- target/tls.sh@113 -- # jq -r 
.enable_ktls 00:41:53.100 13:04:12 -- target/tls.sh@113 -- # ktls=true 00:41:53.100 13:04:12 -- target/tls.sh@114 -- # [[ true != \t\r\u\e ]] 00:41:53.100 13:04:12 -- target/tls.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:41:53.358 13:04:12 -- target/tls.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:41:53.358 13:04:12 -- target/tls.sh@121 -- # jq -r .enable_ktls 00:41:53.616 13:04:12 -- target/tls.sh@121 -- # ktls=false 00:41:53.616 13:04:12 -- target/tls.sh@122 -- # [[ false != \f\a\l\s\e ]] 00:41:53.617 13:04:12 -- target/tls.sh@127 -- # format_interchange_psk 00112233445566778899aabbccddeeff 00:41:53.617 13:04:12 -- target/tls.sh@49 -- # local key hash crc 00:41:53.617 13:04:12 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff 00:41:53.617 13:04:12 -- target/tls.sh@51 -- # hash=01 00:41:53.617 13:04:12 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff 00:41:53.617 13:04:12 -- target/tls.sh@52 -- # gzip -1 -c 00:41:53.617 13:04:12 -- target/tls.sh@52 -- # tail -c8 00:41:53.617 13:04:12 -- target/tls.sh@52 -- # head -c 4 00:41:53.617 13:04:12 -- target/tls.sh@52 -- # crc='p$H�' 00:41:53.617 13:04:12 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:41:53.617 13:04:12 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeffp$H�' 00:41:53.617 13:04:12 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:41:53.617 13:04:12 -- target/tls.sh@127 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:41:53.617 13:04:12 -- target/tls.sh@128 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 00:41:53.617 13:04:12 -- target/tls.sh@49 -- # local key hash crc 00:41:53.617 13:04:12 -- target/tls.sh@51 -- # key=ffeeddccbbaa99887766554433221100 00:41:53.617 13:04:12 -- target/tls.sh@51 -- # hash=01 00:41:53.617 13:04:12 -- target/tls.sh@52 -- # echo -n ffeeddccbbaa99887766554433221100 00:41:53.617 13:04:12 -- target/tls.sh@52 -- # gzip -1 -c 00:41:53.617 13:04:12 -- target/tls.sh@52 -- # tail -c8 00:41:53.617 13:04:12 -- target/tls.sh@52 -- # head -c 4 00:41:53.617 13:04:12 -- target/tls.sh@52 -- # crc=$'_\006o\330' 00:41:53.617 13:04:12 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:41:53.617 13:04:12 -- target/tls.sh@54 -- # echo -n $'ffeeddccbbaa99887766554433221100_\006o\330' 00:41:53.617 13:04:12 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:41:53.617 13:04:12 -- target/tls.sh@128 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:41:53.617 13:04:12 -- target/tls.sh@130 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:41:53.617 13:04:12 -- target/tls.sh@131 -- # key_2_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:41:53.617 13:04:12 -- target/tls.sh@133 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:41:53.617 13:04:12 -- target/tls.sh@134 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:41:53.617 13:04:12 -- target/tls.sh@136 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:41:53.617 13:04:12 -- target/tls.sh@137 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:41:53.617 13:04:12 -- target/tls.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:41:53.874 13:04:13 -- target/tls.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:41:54.133 13:04:13 -- target/tls.sh@142 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:41:54.133 13:04:13 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:41:54.133 13:04:13 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:41:54.391 [2024-07-22 13:04:13.654429] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:54.391 13:04:13 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:41:54.650 13:04:13 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:41:54.909 [2024-07-22 13:04:14.078532] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:41:54.909 [2024-07-22 13:04:14.078743] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:54.909 13:04:14 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:41:54.909 malloc0 00:41:55.167 13:04:14 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:41:55.167 13:04:14 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:41:55.426 13:04:14 -- target/tls.sh@146 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:42:07.655 Initializing NVMe Controllers 00:42:07.655 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:42:07.655 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:42:07.655 Initialization complete. Launching workers. 
00:42:07.655 ======================================================== 00:42:07.655 Latency(us) 00:42:07.655 Device Information : IOPS MiB/s Average min max 00:42:07.655 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11313.39 44.19 5657.89 1675.71 16692.69 00:42:07.655 ======================================================== 00:42:07.655 Total : 11313.39 44.19 5657.89 1675.71 16692.69 00:42:07.655 00:42:07.655 13:04:24 -- target/tls.sh@152 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:42:07.655 13:04:24 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:42:07.655 13:04:24 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:42:07.655 13:04:24 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:42:07.655 13:04:24 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:42:07.655 13:04:24 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:42:07.655 13:04:24 -- target/tls.sh@28 -- # bdevperf_pid=88001 00:42:07.655 13:04:24 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:42:07.655 13:04:24 -- target/tls.sh@31 -- # waitforlisten 88001 /var/tmp/bdevperf.sock 00:42:07.655 13:04:24 -- common/autotest_common.sh@819 -- # '[' -z 88001 ']' 00:42:07.655 13:04:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:42:07.655 13:04:24 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:42:07.655 13:04:24 -- common/autotest_common.sh@824 -- # local max_retries=100 00:42:07.655 13:04:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:42:07.655 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:42:07.655 13:04:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:42:07.655 13:04:24 -- common/autotest_common.sh@10 -- # set +x 00:42:07.655 [2024-07-22 13:04:24.981679] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:42:07.655 [2024-07-22 13:04:24.981779] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88001 ] 00:42:07.655 [2024-07-22 13:04:25.122684] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:07.655 [2024-07-22 13:04:25.197922] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:42:07.655 13:04:25 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:42:07.655 13:04:25 -- common/autotest_common.sh@852 -- # return 0 00:42:07.655 13:04:25 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:42:07.655 [2024-07-22 13:04:26.192361] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:42:07.655 TLSTESTn1 00:42:07.655 13:04:26 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:42:07.655 Running I/O for 10 seconds... 
00:42:17.652 00:42:17.652 Latency(us) 00:42:17.652 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:17.652 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:42:17.652 Verification LBA range: start 0x0 length 0x2000 00:42:17.652 TLSTESTn1 : 10.02 5575.52 21.78 0.00 0.00 22913.10 5064.15 22401.40 00:42:17.652 =================================================================================================================== 00:42:17.652 Total : 5575.52 21.78 0.00 0.00 22913.10 5064.15 22401.40 00:42:17.652 0 00:42:17.652 13:04:36 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:42:17.652 13:04:36 -- target/tls.sh@45 -- # killprocess 88001 00:42:17.652 13:04:36 -- common/autotest_common.sh@926 -- # '[' -z 88001 ']' 00:42:17.652 13:04:36 -- common/autotest_common.sh@930 -- # kill -0 88001 00:42:17.652 13:04:36 -- common/autotest_common.sh@931 -- # uname 00:42:17.652 13:04:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:42:17.652 13:04:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 88001 00:42:17.652 killing process with pid 88001 00:42:17.652 Received shutdown signal, test time was about 10.000000 seconds 00:42:17.652 00:42:17.652 Latency(us) 00:42:17.652 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:17.652 =================================================================================================================== 00:42:17.652 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:42:17.652 13:04:36 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:42:17.652 13:04:36 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:42:17.652 13:04:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 88001' 00:42:17.652 13:04:36 -- common/autotest_common.sh@945 -- # kill 88001 00:42:17.652 13:04:36 -- common/autotest_common.sh@950 -- # wait 88001 00:42:17.652 13:04:36 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:42:17.652 13:04:36 -- common/autotest_common.sh@640 -- # local es=0 00:42:17.652 13:04:36 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:42:17.652 13:04:36 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:42:17.652 13:04:36 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:42:17.652 13:04:36 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:42:17.652 13:04:36 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:42:17.652 13:04:36 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:42:17.652 13:04:36 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:42:17.652 13:04:36 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:42:17.652 13:04:36 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:42:17.652 13:04:36 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt' 00:42:17.652 13:04:36 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:42:17.652 13:04:36 -- target/tls.sh@28 -- # bdevperf_pid=88153 00:42:17.652 13:04:36 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:42:17.652 13:04:36 -- target/tls.sh@27 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:42:17.652 13:04:36 -- target/tls.sh@31 -- # waitforlisten 88153 /var/tmp/bdevperf.sock 00:42:17.652 13:04:36 -- common/autotest_common.sh@819 -- # '[' -z 88153 ']' 00:42:17.652 13:04:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:42:17.652 13:04:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:42:17.652 13:04:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:42:17.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:42:17.652 13:04:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:42:17.652 13:04:36 -- common/autotest_common.sh@10 -- # set +x 00:42:17.652 [2024-07-22 13:04:36.763885] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:42:17.652 [2024-07-22 13:04:36.764165] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88153 ] 00:42:17.652 [2024-07-22 13:04:36.904642] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:17.652 [2024-07-22 13:04:36.988249] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:42:18.587 13:04:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:42:18.587 13:04:37 -- common/autotest_common.sh@852 -- # return 0 00:42:18.587 13:04:37 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:42:18.846 [2024-07-22 13:04:38.022324] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:42:18.846 [2024-07-22 13:04:38.031178] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:42:18.846 [2024-07-22 13:04:38.032092] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c7db0 (107): Transport endpoint is not connected 00:42:18.846 [2024-07-22 13:04:38.033067] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c7db0 (9): Bad file descriptor 00:42:18.846 [2024-07-22 13:04:38.034063] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:18.847 [2024-07-22 13:04:38.034086] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:42:18.847 [2024-07-22 13:04:38.034096] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
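
This first negative case hands bdevperf key2.txt while host1 was registered on the target with key1.txt (tls.sh@67), so the TLS handshake fails and the controller ends up in the failed state shown above. The interchange-format keys themselves were produced earlier by format_interchange_psk (tls.sh@127-@128); a sketch of that derivation, reconstructed from the traced pipeline, is:

key=00112233445566778899aabbccddeeff                     # configured PSK (hex string)
# gzip's 8-byte trailer is CRC32 then input length, so the first 4 of the last 8 bytes are the CRC32 of the key string.
crc=$(echo -n "$key" | gzip -1 -c | tail -c8 | head -c 4)
b64=$(echo -n "${key}${crc}" | base64)
echo "NVMeTLSkey-1:01:${b64}:"   # -> NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: (the "01" is the hash identifier used by the test)
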
00:42:18.847 2024/07/22 13:04:38 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:42:18.847 request: 00:42:18.847 { 00:42:18.847 "method": "bdev_nvme_attach_controller", 00:42:18.847 "params": { 00:42:18.847 "name": "TLSTEST", 00:42:18.847 "trtype": "tcp", 00:42:18.847 "traddr": "10.0.0.2", 00:42:18.847 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:42:18.847 "adrfam": "ipv4", 00:42:18.847 "trsvcid": "4420", 00:42:18.847 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:42:18.847 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt" 00:42:18.847 } 00:42:18.847 } 00:42:18.847 Got JSON-RPC error response 00:42:18.847 GoRPCClient: error on JSON-RPC call 00:42:18.847 13:04:38 -- target/tls.sh@36 -- # killprocess 88153 00:42:18.847 13:04:38 -- common/autotest_common.sh@926 -- # '[' -z 88153 ']' 00:42:18.847 13:04:38 -- common/autotest_common.sh@930 -- # kill -0 88153 00:42:18.847 13:04:38 -- common/autotest_common.sh@931 -- # uname 00:42:18.847 13:04:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:42:18.847 13:04:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 88153 00:42:18.847 killing process with pid 88153 00:42:18.847 Received shutdown signal, test time was about 10.000000 seconds 00:42:18.847 00:42:18.847 Latency(us) 00:42:18.847 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:18.847 =================================================================================================================== 00:42:18.847 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:42:18.847 13:04:38 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:42:18.847 13:04:38 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:42:18.847 13:04:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 88153' 00:42:18.847 13:04:38 -- common/autotest_common.sh@945 -- # kill 88153 00:42:18.847 13:04:38 -- common/autotest_common.sh@950 -- # wait 88153 00:42:19.105 13:04:38 -- target/tls.sh@37 -- # return 1 00:42:19.105 13:04:38 -- common/autotest_common.sh@643 -- # es=1 00:42:19.105 13:04:38 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:42:19.105 13:04:38 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:42:19.105 13:04:38 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:42:19.105 13:04:38 -- target/tls.sh@158 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:42:19.105 13:04:38 -- common/autotest_common.sh@640 -- # local es=0 00:42:19.105 13:04:38 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:42:19.105 13:04:38 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:42:19.105 13:04:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:42:19.105 13:04:38 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:42:19.105 13:04:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:42:19.105 13:04:38 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:42:19.105 13:04:38 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:42:19.105 13:04:38 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:42:19.105 13:04:38 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:42:19.105 13:04:38 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:42:19.105 13:04:38 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:42:19.105 13:04:38 -- target/tls.sh@28 -- # bdevperf_pid=88198 00:42:19.105 13:04:38 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:42:19.105 13:04:38 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:42:19.105 13:04:38 -- target/tls.sh@31 -- # waitforlisten 88198 /var/tmp/bdevperf.sock 00:42:19.105 13:04:38 -- common/autotest_common.sh@819 -- # '[' -z 88198 ']' 00:42:19.105 13:04:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:42:19.105 13:04:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:42:19.105 13:04:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:42:19.105 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:42:19.105 13:04:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:42:19.105 13:04:38 -- common/autotest_common.sh@10 -- # set +x 00:42:19.105 [2024-07-22 13:04:38.351870] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:42:19.105 [2024-07-22 13:04:38.351959] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88198 ] 00:42:19.105 [2024-07-22 13:04:38.491174] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:19.364 [2024-07-22 13:04:38.567128] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:42:19.930 13:04:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:42:19.930 13:04:39 -- common/autotest_common.sh@852 -- # return 0 00:42:19.930 13:04:39 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:42:20.188 [2024-07-22 13:04:39.555634] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:42:20.188 [2024-07-22 13:04:39.561354] tcp.c: 866:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:42:20.188 [2024-07-22 13:04:39.561396] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:42:20.189 [2024-07-22 13:04:39.561450] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:42:20.189 [2024-07-22 13:04:39.562429] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2086db0 (107): Transport endpoint is not connected 
00:42:20.189 [2024-07-22 13:04:39.563419] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2086db0 (9): Bad file descriptor 00:42:20.189 [2024-07-22 13:04:39.564415] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:20.189 [2024-07-22 13:04:39.564460] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:42:20.189 [2024-07-22 13:04:39.564486] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:20.189 2024/07/22 13:04:39 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host2 name:TLSTEST psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:42:20.189 request: 00:42:20.189 { 00:42:20.189 "method": "bdev_nvme_attach_controller", 00:42:20.189 "params": { 00:42:20.189 "name": "TLSTEST", 00:42:20.189 "trtype": "tcp", 00:42:20.189 "traddr": "10.0.0.2", 00:42:20.189 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:42:20.189 "adrfam": "ipv4", 00:42:20.189 "trsvcid": "4420", 00:42:20.189 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:42:20.189 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt" 00:42:20.189 } 00:42:20.189 } 00:42:20.189 Got JSON-RPC error response 00:42:20.189 GoRPCClient: error on JSON-RPC call 00:42:20.189 13:04:39 -- target/tls.sh@36 -- # killprocess 88198 00:42:20.189 13:04:39 -- common/autotest_common.sh@926 -- # '[' -z 88198 ']' 00:42:20.189 13:04:39 -- common/autotest_common.sh@930 -- # kill -0 88198 00:42:20.189 13:04:39 -- common/autotest_common.sh@931 -- # uname 00:42:20.189 13:04:39 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:42:20.189 13:04:39 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 88198 00:42:20.447 killing process with pid 88198 00:42:20.447 Received shutdown signal, test time was about 10.000000 seconds 00:42:20.447 00:42:20.447 Latency(us) 00:42:20.447 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:20.447 =================================================================================================================== 00:42:20.447 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:42:20.447 13:04:39 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:42:20.447 13:04:39 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:42:20.447 13:04:39 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 88198' 00:42:20.447 13:04:39 -- common/autotest_common.sh@945 -- # kill 88198 00:42:20.447 13:04:39 -- common/autotest_common.sh@950 -- # wait 88198 00:42:20.447 13:04:39 -- target/tls.sh@37 -- # return 1 00:42:20.447 13:04:39 -- common/autotest_common.sh@643 -- # es=1 00:42:20.447 13:04:39 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:42:20.447 13:04:39 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:42:20.447 13:04:39 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:42:20.447 13:04:39 -- target/tls.sh@161 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:42:20.447 13:04:39 -- common/autotest_common.sh@640 -- # local es=0 00:42:20.447 13:04:39 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 
nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:42:20.447 13:04:39 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:42:20.447 13:04:39 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:42:20.447 13:04:39 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:42:20.447 13:04:39 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:42:20.447 13:04:39 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:42:20.447 13:04:39 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:42:20.447 13:04:39 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:42:20.447 13:04:39 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:42:20.447 13:04:39 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:42:20.447 13:04:39 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:42:20.447 13:04:39 -- target/tls.sh@28 -- # bdevperf_pid=88246 00:42:20.447 13:04:39 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:42:20.447 13:04:39 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:42:20.447 13:04:39 -- target/tls.sh@31 -- # waitforlisten 88246 /var/tmp/bdevperf.sock 00:42:20.447 13:04:39 -- common/autotest_common.sh@819 -- # '[' -z 88246 ']' 00:42:20.447 13:04:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:42:20.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:42:20.447 13:04:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:42:20.447 13:04:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:42:20.447 13:04:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:42:20.447 13:04:39 -- common/autotest_common.sh@10 -- # set +x 00:42:20.706 [2024-07-22 13:04:39.929724] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:42:20.706 [2024-07-22 13:04:39.929877] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88246 ] 00:42:20.706 [2024-07-22 13:04:40.084013] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:20.964 [2024-07-22 13:04:40.164188] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:42:21.531 13:04:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:42:21.531 13:04:40 -- common/autotest_common.sh@852 -- # return 0 00:42:21.531 13:04:40 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:42:21.790 [2024-07-22 13:04:41.094293] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:42:21.790 [2024-07-22 13:04:41.101737] tcp.c: 866:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:42:21.790 [2024-07-22 13:04:41.101809] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:42:21.790 [2024-07-22 13:04:41.101864] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:42:21.790 [2024-07-22 13:04:41.101927] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bc6db0 (107): Transport endpoint is not connected 00:42:21.790 [2024-07-22 13:04:41.102921] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bc6db0 (9): Bad file descriptor 00:42:21.790 [2024-07-22 13:04:41.103918] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:42:21.790 [2024-07-22 13:04:41.103940] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:42:21.790 [2024-07-22 13:04:41.103951] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
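[editor's note] The failure above is the expected outcome of this negative test: the target's PSK lookup is keyed by the TLS PSK identity, which encodes the host NQN and subsystem NQN ("NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2" in the error), and no PSK is registered for that pair, so the handshake is torn down and the attach RPC returns -32602. A minimal sketch of the same check from the shell, using the exact command and paths from the log; the trailing || branch is only an illustration of how the expected failure could be asserted, not part of the test script:

    # Attach with a PSK the target has not associated with this host/subsystem identity;
    # the RPC is expected to fail with Code=-32602 Invalid parameters.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 \
        --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt \
      || echo "attach failed as expected"
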
00:42:21.790 2024/07/22 13:04:41 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:42:21.790 request: 00:42:21.790 { 00:42:21.790 "method": "bdev_nvme_attach_controller", 00:42:21.790 "params": { 00:42:21.790 "name": "TLSTEST", 00:42:21.790 "trtype": "tcp", 00:42:21.790 "traddr": "10.0.0.2", 00:42:21.790 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:42:21.790 "adrfam": "ipv4", 00:42:21.790 "trsvcid": "4420", 00:42:21.790 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:42:21.790 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt" 00:42:21.790 } 00:42:21.790 } 00:42:21.790 Got JSON-RPC error response 00:42:21.790 GoRPCClient: error on JSON-RPC call 00:42:21.790 13:04:41 -- target/tls.sh@36 -- # killprocess 88246 00:42:21.790 13:04:41 -- common/autotest_common.sh@926 -- # '[' -z 88246 ']' 00:42:21.790 13:04:41 -- common/autotest_common.sh@930 -- # kill -0 88246 00:42:21.790 13:04:41 -- common/autotest_common.sh@931 -- # uname 00:42:21.790 13:04:41 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:42:21.790 13:04:41 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 88246 00:42:21.790 killing process with pid 88246 00:42:21.790 Received shutdown signal, test time was about 10.000000 seconds 00:42:21.790 00:42:21.790 Latency(us) 00:42:21.790 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:21.790 =================================================================================================================== 00:42:21.790 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:42:21.790 13:04:41 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:42:21.790 13:04:41 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:42:21.790 13:04:41 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 88246' 00:42:21.790 13:04:41 -- common/autotest_common.sh@945 -- # kill 88246 00:42:21.790 13:04:41 -- common/autotest_common.sh@950 -- # wait 88246 00:42:22.049 13:04:41 -- target/tls.sh@37 -- # return 1 00:42:22.049 13:04:41 -- common/autotest_common.sh@643 -- # es=1 00:42:22.049 13:04:41 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:42:22.049 13:04:41 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:42:22.049 13:04:41 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:42:22.049 13:04:41 -- target/tls.sh@164 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:42:22.049 13:04:41 -- common/autotest_common.sh@640 -- # local es=0 00:42:22.049 13:04:41 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:42:22.049 13:04:41 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:42:22.049 13:04:41 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:42:22.050 13:04:41 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:42:22.050 13:04:41 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:42:22.050 13:04:41 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:42:22.050 13:04:41 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:42:22.050 13:04:41 -- target/tls.sh@23 -- 
# subnqn=nqn.2016-06.io.spdk:cnode1 00:42:22.050 13:04:41 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:42:22.050 13:04:41 -- target/tls.sh@23 -- # psk= 00:42:22.050 13:04:41 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:42:22.050 13:04:41 -- target/tls.sh@28 -- # bdevperf_pid=88286 00:42:22.050 13:04:41 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:42:22.050 13:04:41 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:42:22.050 13:04:41 -- target/tls.sh@31 -- # waitforlisten 88286 /var/tmp/bdevperf.sock 00:42:22.050 13:04:41 -- common/autotest_common.sh@819 -- # '[' -z 88286 ']' 00:42:22.050 13:04:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:42:22.050 13:04:41 -- common/autotest_common.sh@824 -- # local max_retries=100 00:42:22.050 13:04:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:42:22.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:42:22.050 13:04:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:42:22.050 13:04:41 -- common/autotest_common.sh@10 -- # set +x 00:42:22.050 [2024-07-22 13:04:41.412245] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:42:22.050 [2024-07-22 13:04:41.412342] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88286 ] 00:42:22.309 [2024-07-22 13:04:41.556016] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:22.309 [2024-07-22 13:04:41.637713] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:42:23.246 13:04:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:42:23.246 13:04:42 -- common/autotest_common.sh@852 -- # return 0 00:42:23.246 13:04:42 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:42:23.246 [2024-07-22 13:04:42.661888] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:42:23.246 [2024-07-22 13:04:42.663536] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc44f0 (9): Bad file descriptor 00:42:23.246 [2024-07-22 13:04:42.664516] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:42:23.246 [2024-07-22 13:04:42.664540] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:42:23.246 [2024-07-22 13:04:42.664551] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
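[editor's note] The case above drives the same attach path with no --psk at all against the TLS-only listener, again wrapped in the suite's expected-failure helper. A rough reconstruction of that helper's behavior, inferred from the es bookkeeping visible in the xtrace and deliberately simplified (the real autotest_common.sh version also special-cases exit codes above 128):

    # Hypothetical sketch of the NOT() wrapper used for these negative-path calls:
    # run the command, record its exit status, and succeed only if it failed.
    NOT() {
        local es=0
        "$@" || es=$?
        (( es != 0 ))
    }
    # e.g. NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 ''
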
00:42:23.505 2024/07/22 13:04:42 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:42:23.505 request: 00:42:23.505 { 00:42:23.505 "method": "bdev_nvme_attach_controller", 00:42:23.505 "params": { 00:42:23.505 "name": "TLSTEST", 00:42:23.505 "trtype": "tcp", 00:42:23.505 "traddr": "10.0.0.2", 00:42:23.505 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:42:23.505 "adrfam": "ipv4", 00:42:23.505 "trsvcid": "4420", 00:42:23.505 "subnqn": "nqn.2016-06.io.spdk:cnode1" 00:42:23.505 } 00:42:23.505 } 00:42:23.505 Got JSON-RPC error response 00:42:23.505 GoRPCClient: error on JSON-RPC call 00:42:23.505 13:04:42 -- target/tls.sh@36 -- # killprocess 88286 00:42:23.505 13:04:42 -- common/autotest_common.sh@926 -- # '[' -z 88286 ']' 00:42:23.505 13:04:42 -- common/autotest_common.sh@930 -- # kill -0 88286 00:42:23.505 13:04:42 -- common/autotest_common.sh@931 -- # uname 00:42:23.505 13:04:42 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:42:23.505 13:04:42 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 88286 00:42:23.505 killing process with pid 88286 00:42:23.505 Received shutdown signal, test time was about 10.000000 seconds 00:42:23.505 00:42:23.505 Latency(us) 00:42:23.505 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:23.505 =================================================================================================================== 00:42:23.505 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:42:23.505 13:04:42 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:42:23.505 13:04:42 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:42:23.505 13:04:42 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 88286' 00:42:23.505 13:04:42 -- common/autotest_common.sh@945 -- # kill 88286 00:42:23.505 13:04:42 -- common/autotest_common.sh@950 -- # wait 88286 00:42:23.505 13:04:42 -- target/tls.sh@37 -- # return 1 00:42:23.505 13:04:42 -- common/autotest_common.sh@643 -- # es=1 00:42:23.505 13:04:42 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:42:23.505 13:04:42 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:42:23.505 13:04:42 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:42:23.505 13:04:42 -- target/tls.sh@167 -- # killprocess 87635 00:42:23.505 13:04:42 -- common/autotest_common.sh@926 -- # '[' -z 87635 ']' 00:42:23.505 13:04:42 -- common/autotest_common.sh@930 -- # kill -0 87635 00:42:23.505 13:04:42 -- common/autotest_common.sh@931 -- # uname 00:42:23.505 13:04:42 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:42:23.505 13:04:42 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 87635 00:42:23.775 killing process with pid 87635 00:42:23.775 13:04:42 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:42:23.775 13:04:42 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:42:23.775 13:04:42 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 87635' 00:42:23.775 13:04:42 -- common/autotest_common.sh@945 -- # kill 87635 00:42:23.775 13:04:42 -- common/autotest_common.sh@950 -- # wait 87635 00:42:23.775 13:04:43 -- target/tls.sh@168 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 02 00:42:23.775 13:04:43 -- 
target/tls.sh@49 -- # local key hash crc 00:42:23.775 13:04:43 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:42:23.775 13:04:43 -- target/tls.sh@51 -- # hash=02 00:42:23.775 13:04:43 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff0011223344556677 00:42:23.775 13:04:43 -- target/tls.sh@52 -- # gzip -1 -c 00:42:23.775 13:04:43 -- target/tls.sh@52 -- # head -c 4 00:42:23.775 13:04:43 -- target/tls.sh@52 -- # tail -c8 00:42:23.775 13:04:43 -- target/tls.sh@52 -- # crc='�e�'\''' 00:42:23.775 13:04:43 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:42:23.775 13:04:43 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeff0011223344556677�e�'\''' 00:42:23.775 13:04:43 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:42:23.775 13:04:43 -- target/tls.sh@168 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:42:23.775 13:04:43 -- target/tls.sh@169 -- # key_long_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:42:23.775 13:04:43 -- target/tls.sh@170 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:42:23.775 13:04:43 -- target/tls.sh@171 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:42:23.775 13:04:43 -- target/tls.sh@172 -- # nvmfappstart -m 0x2 00:42:23.775 13:04:43 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:42:23.775 13:04:43 -- common/autotest_common.sh@712 -- # xtrace_disable 00:42:23.775 13:04:43 -- common/autotest_common.sh@10 -- # set +x 00:42:23.775 13:04:43 -- nvmf/common.sh@469 -- # nvmfpid=88352 00:42:23.775 13:04:43 -- nvmf/common.sh@470 -- # waitforlisten 88352 00:42:23.776 13:04:43 -- common/autotest_common.sh@819 -- # '[' -z 88352 ']' 00:42:23.776 13:04:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:23.776 13:04:43 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:42:23.776 13:04:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:42:23.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:23.776 13:04:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:23.776 13:04:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:42:23.776 13:04:43 -- common/autotest_common.sh@10 -- # set +x 00:42:24.034 [2024-07-22 13:04:43.244320] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:42:24.034 [2024-07-22 13:04:43.244408] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:24.034 [2024-07-22 13:04:43.383587] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:24.293 [2024-07-22 13:04:43.462188] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:42:24.293 [2024-07-22 13:04:43.462319] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:24.293 [2024-07-22 13:04:43.462331] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
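[editor's note] The format_interchange_psk steps above condense to a few shell commands. This is a sketch using the same 48-hex-digit key and hash selector shown in the log; the gzip trick works because a gzip stream ends with the CRC32 of the input (4 bytes) followed by the input size, so tail -c8 | head -c4 isolates that CRC, which is appended to the key before base64 encoding:

    # Sketch of the interchange-format PSK derivation shown above (same inputs as the log).
    key=00112233445566778899aabbccddeeff0011223344556677
    hash=02
    # CRC32 of the key, taken from the gzip stream trailer. Binary bytes land in a shell
    # variable; that is fine here because none of the CRC bytes are NUL or a trailing newline.
    crc=$(echo -n "$key" | gzip -1 -c | tail -c8 | head -c4)
    # Base64-encode key||crc and wrap it in the NVMeTLSkey-1:<hash>:...: envelope.
    key_long="NVMeTLSkey-1:${hash}:$(echo -n "${key}${crc}" | base64):"
    echo "$key_long"   # NVMeTLSkey-1:02:MDAxMTIy...NTU2Njc3wWXNJw==:
    echo -n "$key_long" > key_long.txt && chmod 0600 key_long.txt
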
00:42:24.293 [2024-07-22 13:04:43.462341] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:42:24.293 [2024-07-22 13:04:43.462371] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:42:24.861 13:04:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:42:24.861 13:04:44 -- common/autotest_common.sh@852 -- # return 0 00:42:24.861 13:04:44 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:42:24.861 13:04:44 -- common/autotest_common.sh@718 -- # xtrace_disable 00:42:24.861 13:04:44 -- common/autotest_common.sh@10 -- # set +x 00:42:25.218 13:04:44 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:25.218 13:04:44 -- target/tls.sh@174 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:42:25.218 13:04:44 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:42:25.218 13:04:44 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:42:25.218 [2024-07-22 13:04:44.488004] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:25.218 13:04:44 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:42:25.477 13:04:44 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:42:25.736 [2024-07-22 13:04:44.968145] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:42:25.736 [2024-07-22 13:04:44.968364] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:25.736 13:04:44 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:42:25.994 malloc0 00:42:25.994 13:04:45 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:42:26.253 13:04:45 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:42:26.512 13:04:45 -- target/tls.sh@176 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:42:26.512 13:04:45 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:42:26.512 13:04:45 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:42:26.512 13:04:45 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:42:26.512 13:04:45 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt' 00:42:26.512 13:04:45 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:42:26.512 13:04:45 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:42:26.512 13:04:45 -- target/tls.sh@28 -- # bdevperf_pid=88450 00:42:26.512 13:04:45 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:42:26.512 13:04:45 -- target/tls.sh@31 -- # waitforlisten 88450 /var/tmp/bdevperf.sock 00:42:26.512 13:04:45 -- common/autotest_common.sh@819 -- # '[' -z 88450 ']' 00:42:26.512 13:04:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:42:26.512 
13:04:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:42:26.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:42:26.512 13:04:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:42:26.512 13:04:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:42:26.512 13:04:45 -- common/autotest_common.sh@10 -- # set +x 00:42:26.512 [2024-07-22 13:04:45.801769] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:42:26.512 [2024-07-22 13:04:45.801851] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88450 ] 00:42:26.772 [2024-07-22 13:04:45.938095] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:26.772 [2024-07-22 13:04:46.012855] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:42:27.353 13:04:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:42:27.353 13:04:46 -- common/autotest_common.sh@852 -- # return 0 00:42:27.353 13:04:46 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:42:27.613 [2024-07-22 13:04:46.989695] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:42:27.871 TLSTESTn1 00:42:27.871 13:04:47 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:42:27.871 Running I/O for 10 seconds... 
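[editor's note] This is the positive path: the same attach, but with the long key that was just registered for host1/cnode1 on the target, so the controller comes up and the queued verify workload runs over the TLS connection for 10 seconds. Condensed from the commands in the log (same RPC socket and key path):

    # Attach with the matching PSK, then kick off the queued verify job.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -t 20 -s /var/tmp/bdevperf.sock perform_tests
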
00:42:37.842 00:42:37.842 Latency(us) 00:42:37.842 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:37.842 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:42:37.842 Verification LBA range: start 0x0 length 0x2000 00:42:37.842 TLSTESTn1 : 10.02 5553.98 21.70 0.00 0.00 23002.76 6791.91 20614.05 00:42:37.842 =================================================================================================================== 00:42:37.842 Total : 5553.98 21.70 0.00 0.00 23002.76 6791.91 20614.05 00:42:37.842 0 00:42:37.842 13:04:57 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:42:37.842 13:04:57 -- target/tls.sh@45 -- # killprocess 88450 00:42:37.843 13:04:57 -- common/autotest_common.sh@926 -- # '[' -z 88450 ']' 00:42:37.843 13:04:57 -- common/autotest_common.sh@930 -- # kill -0 88450 00:42:37.843 13:04:57 -- common/autotest_common.sh@931 -- # uname 00:42:37.843 13:04:57 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:42:37.843 13:04:57 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 88450 00:42:37.843 13:04:57 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:42:37.843 13:04:57 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:42:37.843 killing process with pid 88450 00:42:37.843 13:04:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 88450' 00:42:37.843 Received shutdown signal, test time was about 10.000000 seconds 00:42:37.843 00:42:37.843 Latency(us) 00:42:37.843 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:37.843 =================================================================================================================== 00:42:37.843 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:42:37.843 13:04:57 -- common/autotest_common.sh@945 -- # kill 88450 00:42:37.843 13:04:57 -- common/autotest_common.sh@950 -- # wait 88450 00:42:38.102 13:04:57 -- target/tls.sh@179 -- # chmod 0666 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:42:38.102 13:04:57 -- target/tls.sh@180 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:42:38.102 13:04:57 -- common/autotest_common.sh@640 -- # local es=0 00:42:38.102 13:04:57 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:42:38.102 13:04:57 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:42:38.102 13:04:57 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:42:38.102 13:04:57 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:42:38.102 13:04:57 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:42:38.102 13:04:57 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:42:38.102 13:04:57 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:42:38.102 13:04:57 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:42:38.102 13:04:57 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:42:38.102 13:04:57 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt' 00:42:38.102 13:04:57 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:42:38.102 13:04:57 -- target/tls.sh@28 -- # bdevperf_pid=88603 
00:42:38.102 13:04:57 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:42:38.102 13:04:57 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:42:38.102 13:04:57 -- target/tls.sh@31 -- # waitforlisten 88603 /var/tmp/bdevperf.sock 00:42:38.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:42:38.102 13:04:57 -- common/autotest_common.sh@819 -- # '[' -z 88603 ']' 00:42:38.102 13:04:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:42:38.102 13:04:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:42:38.102 13:04:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:42:38.102 13:04:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:42:38.102 13:04:57 -- common/autotest_common.sh@10 -- # set +x 00:42:38.361 [2024-07-22 13:04:57.525736] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:42:38.361 [2024-07-22 13:04:57.525844] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88603 ] 00:42:38.361 [2024-07-22 13:04:57.663095] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:38.361 [2024-07-22 13:04:57.751211] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:42:39.296 13:04:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:42:39.296 13:04:58 -- common/autotest_common.sh@852 -- # return 0 00:42:39.296 13:04:58 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:42:39.296 [2024-07-22 13:04:58.683867] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:42:39.296 [2024-07-22 13:04:58.683914] bdev_nvme_rpc.c: 336:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:42:39.296 2024/07/22 13:04:58 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-22 Msg=Could not retrieve PSK from file: /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:42:39.296 request: 00:42:39.296 { 00:42:39.296 "method": "bdev_nvme_attach_controller", 00:42:39.296 "params": { 00:42:39.296 "name": "TLSTEST", 00:42:39.296 "trtype": "tcp", 00:42:39.296 "traddr": "10.0.0.2", 00:42:39.296 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:42:39.296 "adrfam": "ipv4", 00:42:39.296 "trsvcid": "4420", 00:42:39.296 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:42:39.296 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:42:39.296 } 00:42:39.296 } 00:42:39.296 Got JSON-RPC error response 00:42:39.296 GoRPCClient: error on JSON-RPC call 00:42:39.296 13:04:58 -- target/tls.sh@36 -- # killprocess 88603 00:42:39.296 13:04:58 -- common/autotest_common.sh@926 -- # '[' -z 88603 ']' 
00:42:39.296 13:04:58 -- common/autotest_common.sh@930 -- # kill -0 88603 00:42:39.296 13:04:58 -- common/autotest_common.sh@931 -- # uname 00:42:39.296 13:04:58 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:42:39.296 13:04:58 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 88603 00:42:39.555 13:04:58 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:42:39.555 13:04:58 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:42:39.555 killing process with pid 88603 00:42:39.555 13:04:58 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 88603' 00:42:39.555 Received shutdown signal, test time was about 10.000000 seconds 00:42:39.555 00:42:39.555 Latency(us) 00:42:39.555 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:39.555 =================================================================================================================== 00:42:39.555 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:42:39.555 13:04:58 -- common/autotest_common.sh@945 -- # kill 88603 00:42:39.555 13:04:58 -- common/autotest_common.sh@950 -- # wait 88603 00:42:39.555 13:04:58 -- target/tls.sh@37 -- # return 1 00:42:39.555 13:04:58 -- common/autotest_common.sh@643 -- # es=1 00:42:39.555 13:04:58 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:42:39.555 13:04:58 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:42:39.555 13:04:58 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:42:39.555 13:04:58 -- target/tls.sh@183 -- # killprocess 88352 00:42:39.555 13:04:58 -- common/autotest_common.sh@926 -- # '[' -z 88352 ']' 00:42:39.555 13:04:58 -- common/autotest_common.sh@930 -- # kill -0 88352 00:42:39.555 13:04:58 -- common/autotest_common.sh@931 -- # uname 00:42:39.555 13:04:58 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:42:39.555 13:04:58 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 88352 00:42:39.555 13:04:58 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:42:39.555 killing process with pid 88352 00:42:39.555 13:04:58 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:42:39.555 13:04:58 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 88352' 00:42:39.555 13:04:58 -- common/autotest_common.sh@945 -- # kill 88352 00:42:39.555 13:04:58 -- common/autotest_common.sh@950 -- # wait 88352 00:42:39.814 13:04:59 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:42:39.814 13:04:59 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:42:39.814 13:04:59 -- common/autotest_common.sh@712 -- # xtrace_disable 00:42:39.814 13:04:59 -- common/autotest_common.sh@10 -- # set +x 00:42:39.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:39.814 13:04:59 -- nvmf/common.sh@469 -- # nvmfpid=88659 00:42:39.814 13:04:59 -- nvmf/common.sh@470 -- # waitforlisten 88659 00:42:39.814 13:04:59 -- common/autotest_common.sh@819 -- # '[' -z 88659 ']' 00:42:39.814 13:04:59 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:42:39.814 13:04:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:39.814 13:04:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:42:39.814 13:04:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:42:39.814 13:04:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:42:39.814 13:04:59 -- common/autotest_common.sh@10 -- # set +x 00:42:39.814 [2024-07-22 13:04:59.201900] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:42:39.814 [2024-07-22 13:04:59.201965] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:40.073 [2024-07-22 13:04:59.334889] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:40.073 [2024-07-22 13:04:59.400478] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:42:40.073 [2024-07-22 13:04:59.400703] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:40.073 [2024-07-22 13:04:59.400718] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:40.073 [2024-07-22 13:04:59.400727] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:42:40.073 [2024-07-22 13:04:59.400752] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:42:41.011 13:05:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:42:41.011 13:05:00 -- common/autotest_common.sh@852 -- # return 0 00:42:41.011 13:05:00 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:42:41.011 13:05:00 -- common/autotest_common.sh@718 -- # xtrace_disable 00:42:41.011 13:05:00 -- common/autotest_common.sh@10 -- # set +x 00:42:41.011 13:05:00 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:41.011 13:05:00 -- target/tls.sh@186 -- # NOT setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:42:41.011 13:05:00 -- common/autotest_common.sh@640 -- # local es=0 00:42:41.011 13:05:00 -- common/autotest_common.sh@642 -- # valid_exec_arg setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:42:41.011 13:05:00 -- common/autotest_common.sh@628 -- # local arg=setup_nvmf_tgt 00:42:41.011 13:05:00 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:42:41.011 13:05:00 -- common/autotest_common.sh@632 -- # type -t setup_nvmf_tgt 00:42:41.011 13:05:00 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:42:41.011 13:05:00 -- common/autotest_common.sh@643 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:42:41.011 13:05:00 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:42:41.011 13:05:00 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:42:41.271 [2024-07-22 13:05:00.460088] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:41.271 13:05:00 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:42:41.530 13:05:00 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:42:41.789 [2024-07-22 13:05:00.956461] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:42:41.789 [2024-07-22 13:05:00.956779] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:42:41.789 13:05:00 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:42:41.789 malloc0 00:42:42.048 13:05:01 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:42:42.048 13:05:01 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:42:42.307 [2024-07-22 13:05:01.669375] tcp.c:3549:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:42:42.307 [2024-07-22 13:05:01.669423] tcp.c:3618:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:42:42.307 [2024-07-22 13:05:01.669444] subsystem.c: 880:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:42:42.307 2024/07/22 13:05:01 error on JSON-RPC call, method: nvmf_subsystem_add_host, params: map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt], err: error received for nvmf_subsystem_add_host method, err: Code=-32603 Msg=Internal error 00:42:42.307 request: 00:42:42.307 { 00:42:42.307 "method": "nvmf_subsystem_add_host", 00:42:42.307 "params": { 00:42:42.307 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:42:42.307 "host": "nqn.2016-06.io.spdk:host1", 00:42:42.307 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:42:42.307 } 00:42:42.307 } 00:42:42.307 Got JSON-RPC error response 00:42:42.307 GoRPCClient: error on JSON-RPC call 00:42:42.307 13:05:01 -- common/autotest_common.sh@643 -- # es=1 00:42:42.307 13:05:01 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:42:42.307 13:05:01 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:42:42.307 13:05:01 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:42:42.307 13:05:01 -- target/tls.sh@189 -- # killprocess 88659 00:42:42.307 13:05:01 -- common/autotest_common.sh@926 -- # '[' -z 88659 ']' 00:42:42.307 13:05:01 -- common/autotest_common.sh@930 -- # kill -0 88659 00:42:42.307 13:05:01 -- common/autotest_common.sh@931 -- # uname 00:42:42.307 13:05:01 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:42:42.307 13:05:01 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 88659 00:42:42.307 13:05:01 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:42:42.307 killing process with pid 88659 00:42:42.307 13:05:01 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:42:42.307 13:05:01 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 88659' 00:42:42.307 13:05:01 -- common/autotest_common.sh@945 -- # kill 88659 00:42:42.307 13:05:01 -- common/autotest_common.sh@950 -- # wait 88659 00:42:42.567 13:05:01 -- target/tls.sh@190 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:42:42.567 13:05:01 -- target/tls.sh@193 -- # nvmfappstart -m 0x2 00:42:42.567 13:05:01 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:42:42.567 13:05:01 -- common/autotest_common.sh@712 -- # xtrace_disable 00:42:42.567 13:05:01 -- common/autotest_common.sh@10 -- # set +x 00:42:42.567 13:05:01 -- nvmf/common.sh@469 -- # nvmfpid=88764 00:42:42.567 13:05:01 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:42:42.567 13:05:01 -- nvmf/common.sh@470 -- # waitforlisten 88764 00:42:42.568 13:05:01 -- 
common/autotest_common.sh@819 -- # '[' -z 88764 ']' 00:42:42.568 13:05:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:42.568 13:05:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:42:42.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:42.568 13:05:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:42.568 13:05:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:42:42.568 13:05:01 -- common/autotest_common.sh@10 -- # set +x 00:42:42.826 [2024-07-22 13:05:02.009403] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:42:42.826 [2024-07-22 13:05:02.010011] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:42.826 [2024-07-22 13:05:02.143099] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:42.826 [2024-07-22 13:05:02.216376] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:42:42.826 [2024-07-22 13:05:02.216510] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:42.826 [2024-07-22 13:05:02.216538] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:42.827 [2024-07-22 13:05:02.216562] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:42:42.827 [2024-07-22 13:05:02.216584] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:42:43.801 13:05:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:42:43.801 13:05:02 -- common/autotest_common.sh@852 -- # return 0 00:42:43.801 13:05:02 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:42:43.801 13:05:02 -- common/autotest_common.sh@718 -- # xtrace_disable 00:42:43.801 13:05:02 -- common/autotest_common.sh@10 -- # set +x 00:42:43.801 13:05:02 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:43.801 13:05:02 -- target/tls.sh@194 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:42:43.801 13:05:02 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:42:43.801 13:05:02 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:42:43.801 [2024-07-22 13:05:03.150259] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:43.801 13:05:03 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:42:44.059 13:05:03 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:42:44.318 [2024-07-22 13:05:03.610361] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:42:44.318 [2024-07-22 13:05:03.610592] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:44.318 13:05:03 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:42:44.577 malloc0 00:42:44.577 13:05:03 -- target/tls.sh@65 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:42:44.835 13:05:04 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:42:45.094 13:05:04 -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:42:45.094 13:05:04 -- target/tls.sh@197 -- # bdevperf_pid=88867 00:42:45.094 13:05:04 -- target/tls.sh@199 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:42:45.094 13:05:04 -- target/tls.sh@200 -- # waitforlisten 88867 /var/tmp/bdevperf.sock 00:42:45.094 13:05:04 -- common/autotest_common.sh@819 -- # '[' -z 88867 ']' 00:42:45.094 13:05:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:42:45.094 13:05:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:42:45.094 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:42:45.094 13:05:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:42:45.094 13:05:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:42:45.094 13:05:04 -- common/autotest_common.sh@10 -- # set +x 00:42:45.094 [2024-07-22 13:05:04.337591] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:42:45.094 [2024-07-22 13:05:04.337680] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88867 ] 00:42:45.094 [2024-07-22 13:05:04.476096] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:45.353 [2024-07-22 13:05:04.540687] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:42:45.919 13:05:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:42:45.919 13:05:05 -- common/autotest_common.sh@852 -- # return 0 00:42:45.919 13:05:05 -- target/tls.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:42:46.178 [2024-07-22 13:05:05.522856] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:42:46.178 TLSTESTn1 00:42:46.437 13:05:05 -- target/tls.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:42:46.695 13:05:05 -- target/tls.sh@205 -- # tgtconf='{ 00:42:46.695 "subsystems": [ 00:42:46.695 { 00:42:46.695 "subsystem": "iobuf", 00:42:46.695 "config": [ 00:42:46.695 { 00:42:46.695 "method": "iobuf_set_options", 00:42:46.695 "params": { 00:42:46.695 "large_bufsize": 135168, 00:42:46.695 "large_pool_count": 1024, 00:42:46.695 "small_bufsize": 8192, 00:42:46.695 "small_pool_count": 8192 00:42:46.695 } 00:42:46.695 } 00:42:46.695 ] 00:42:46.695 }, 00:42:46.695 { 00:42:46.695 "subsystem": "sock", 00:42:46.695 "config": [ 00:42:46.695 { 00:42:46.695 "method": "sock_impl_set_options", 00:42:46.695 "params": { 00:42:46.695 "enable_ktls": false, 00:42:46.695 "enable_placement_id": 0, 00:42:46.695 "enable_quickack": false, 00:42:46.695 "enable_recv_pipe": true, 00:42:46.695 
"enable_zerocopy_send_client": false, 00:42:46.695 "enable_zerocopy_send_server": true, 00:42:46.695 "impl_name": "posix", 00:42:46.695 "recv_buf_size": 2097152, 00:42:46.695 "send_buf_size": 2097152, 00:42:46.695 "tls_version": 0, 00:42:46.695 "zerocopy_threshold": 0 00:42:46.695 } 00:42:46.695 }, 00:42:46.695 { 00:42:46.695 "method": "sock_impl_set_options", 00:42:46.695 "params": { 00:42:46.695 "enable_ktls": false, 00:42:46.695 "enable_placement_id": 0, 00:42:46.695 "enable_quickack": false, 00:42:46.695 "enable_recv_pipe": true, 00:42:46.695 "enable_zerocopy_send_client": false, 00:42:46.695 "enable_zerocopy_send_server": true, 00:42:46.695 "impl_name": "ssl", 00:42:46.695 "recv_buf_size": 4096, 00:42:46.695 "send_buf_size": 4096, 00:42:46.695 "tls_version": 0, 00:42:46.696 "zerocopy_threshold": 0 00:42:46.696 } 00:42:46.696 } 00:42:46.696 ] 00:42:46.696 }, 00:42:46.696 { 00:42:46.696 "subsystem": "vmd", 00:42:46.696 "config": [] 00:42:46.696 }, 00:42:46.696 { 00:42:46.696 "subsystem": "accel", 00:42:46.696 "config": [ 00:42:46.696 { 00:42:46.696 "method": "accel_set_options", 00:42:46.696 "params": { 00:42:46.696 "buf_count": 2048, 00:42:46.696 "large_cache_size": 16, 00:42:46.696 "sequence_count": 2048, 00:42:46.696 "small_cache_size": 128, 00:42:46.696 "task_count": 2048 00:42:46.696 } 00:42:46.696 } 00:42:46.696 ] 00:42:46.696 }, 00:42:46.696 { 00:42:46.696 "subsystem": "bdev", 00:42:46.696 "config": [ 00:42:46.696 { 00:42:46.696 "method": "bdev_set_options", 00:42:46.696 "params": { 00:42:46.696 "bdev_auto_examine": true, 00:42:46.696 "bdev_io_cache_size": 256, 00:42:46.696 "bdev_io_pool_size": 65535, 00:42:46.696 "iobuf_large_cache_size": 16, 00:42:46.696 "iobuf_small_cache_size": 128 00:42:46.696 } 00:42:46.696 }, 00:42:46.696 { 00:42:46.696 "method": "bdev_raid_set_options", 00:42:46.696 "params": { 00:42:46.696 "process_window_size_kb": 1024 00:42:46.696 } 00:42:46.696 }, 00:42:46.696 { 00:42:46.696 "method": "bdev_iscsi_set_options", 00:42:46.696 "params": { 00:42:46.696 "timeout_sec": 30 00:42:46.696 } 00:42:46.696 }, 00:42:46.696 { 00:42:46.696 "method": "bdev_nvme_set_options", 00:42:46.696 "params": { 00:42:46.696 "action_on_timeout": "none", 00:42:46.696 "allow_accel_sequence": false, 00:42:46.696 "arbitration_burst": 0, 00:42:46.696 "bdev_retry_count": 3, 00:42:46.696 "ctrlr_loss_timeout_sec": 0, 00:42:46.696 "delay_cmd_submit": true, 00:42:46.696 "fast_io_fail_timeout_sec": 0, 00:42:46.696 "generate_uuids": false, 00:42:46.696 "high_priority_weight": 0, 00:42:46.696 "io_path_stat": false, 00:42:46.696 "io_queue_requests": 0, 00:42:46.696 "keep_alive_timeout_ms": 10000, 00:42:46.696 "low_priority_weight": 0, 00:42:46.696 "medium_priority_weight": 0, 00:42:46.696 "nvme_adminq_poll_period_us": 10000, 00:42:46.696 "nvme_ioq_poll_period_us": 0, 00:42:46.696 "reconnect_delay_sec": 0, 00:42:46.696 "timeout_admin_us": 0, 00:42:46.696 "timeout_us": 0, 00:42:46.696 "transport_ack_timeout": 0, 00:42:46.696 "transport_retry_count": 4, 00:42:46.696 "transport_tos": 0 00:42:46.696 } 00:42:46.696 }, 00:42:46.696 { 00:42:46.696 "method": "bdev_nvme_set_hotplug", 00:42:46.696 "params": { 00:42:46.696 "enable": false, 00:42:46.696 "period_us": 100000 00:42:46.696 } 00:42:46.696 }, 00:42:46.696 { 00:42:46.696 "method": "bdev_malloc_create", 00:42:46.696 "params": { 00:42:46.696 "block_size": 4096, 00:42:46.696 "name": "malloc0", 00:42:46.696 "num_blocks": 8192, 00:42:46.696 "optimal_io_boundary": 0, 00:42:46.696 "physical_block_size": 4096, 00:42:46.696 "uuid": 
"04d11f30-a5a7-4555-ba39-2c4cb1aedd69" 00:42:46.696 } 00:42:46.696 }, 00:42:46.696 { 00:42:46.696 "method": "bdev_wait_for_examine" 00:42:46.696 } 00:42:46.696 ] 00:42:46.696 }, 00:42:46.696 { 00:42:46.696 "subsystem": "nbd", 00:42:46.696 "config": [] 00:42:46.696 }, 00:42:46.696 { 00:42:46.696 "subsystem": "scheduler", 00:42:46.696 "config": [ 00:42:46.696 { 00:42:46.696 "method": "framework_set_scheduler", 00:42:46.696 "params": { 00:42:46.696 "name": "static" 00:42:46.696 } 00:42:46.696 } 00:42:46.696 ] 00:42:46.696 }, 00:42:46.696 { 00:42:46.696 "subsystem": "nvmf", 00:42:46.696 "config": [ 00:42:46.696 { 00:42:46.696 "method": "nvmf_set_config", 00:42:46.696 "params": { 00:42:46.696 "admin_cmd_passthru": { 00:42:46.696 "identify_ctrlr": false 00:42:46.696 }, 00:42:46.696 "discovery_filter": "match_any" 00:42:46.696 } 00:42:46.696 }, 00:42:46.696 { 00:42:46.696 "method": "nvmf_set_max_subsystems", 00:42:46.696 "params": { 00:42:46.696 "max_subsystems": 1024 00:42:46.696 } 00:42:46.696 }, 00:42:46.696 { 00:42:46.696 "method": "nvmf_set_crdt", 00:42:46.696 "params": { 00:42:46.696 "crdt1": 0, 00:42:46.696 "crdt2": 0, 00:42:46.696 "crdt3": 0 00:42:46.696 } 00:42:46.696 }, 00:42:46.696 { 00:42:46.696 "method": "nvmf_create_transport", 00:42:46.696 "params": { 00:42:46.696 "abort_timeout_sec": 1, 00:42:46.696 "buf_cache_size": 4294967295, 00:42:46.696 "c2h_success": false, 00:42:46.696 "dif_insert_or_strip": false, 00:42:46.696 "in_capsule_data_size": 4096, 00:42:46.696 "io_unit_size": 131072, 00:42:46.696 "max_aq_depth": 128, 00:42:46.696 "max_io_qpairs_per_ctrlr": 127, 00:42:46.696 "max_io_size": 131072, 00:42:46.696 "max_queue_depth": 128, 00:42:46.696 "num_shared_buffers": 511, 00:42:46.696 "sock_priority": 0, 00:42:46.696 "trtype": "TCP", 00:42:46.696 "zcopy": false 00:42:46.696 } 00:42:46.696 }, 00:42:46.696 { 00:42:46.696 "method": "nvmf_create_subsystem", 00:42:46.696 "params": { 00:42:46.696 "allow_any_host": false, 00:42:46.696 "ana_reporting": false, 00:42:46.696 "max_cntlid": 65519, 00:42:46.696 "max_namespaces": 10, 00:42:46.696 "min_cntlid": 1, 00:42:46.696 "model_number": "SPDK bdev Controller", 00:42:46.696 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:42:46.696 "serial_number": "SPDK00000000000001" 00:42:46.696 } 00:42:46.696 }, 00:42:46.696 { 00:42:46.696 "method": "nvmf_subsystem_add_host", 00:42:46.696 "params": { 00:42:46.696 "host": "nqn.2016-06.io.spdk:host1", 00:42:46.696 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:42:46.696 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:42:46.696 } 00:42:46.696 }, 00:42:46.696 { 00:42:46.696 "method": "nvmf_subsystem_add_ns", 00:42:46.696 "params": { 00:42:46.696 "namespace": { 00:42:46.696 "bdev_name": "malloc0", 00:42:46.696 "nguid": "04D11F30A5A74555BA392C4CB1AEDD69", 00:42:46.696 "nsid": 1, 00:42:46.696 "uuid": "04d11f30-a5a7-4555-ba39-2c4cb1aedd69" 00:42:46.696 }, 00:42:46.696 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:42:46.696 } 00:42:46.696 }, 00:42:46.696 { 00:42:46.696 "method": "nvmf_subsystem_add_listener", 00:42:46.696 "params": { 00:42:46.696 "listen_address": { 00:42:46.696 "adrfam": "IPv4", 00:42:46.696 "traddr": "10.0.0.2", 00:42:46.696 "trsvcid": "4420", 00:42:46.696 "trtype": "TCP" 00:42:46.696 }, 00:42:46.696 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:42:46.696 "secure_channel": true 00:42:46.696 } 00:42:46.696 } 00:42:46.696 ] 00:42:46.696 } 00:42:46.696 ] 00:42:46.696 }' 00:42:46.696 13:05:05 -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 
00:42:46.956 13:05:06 -- target/tls.sh@206 -- # bdevperfconf='{ 00:42:46.956 "subsystems": [ 00:42:46.956 { 00:42:46.956 "subsystem": "iobuf", 00:42:46.956 "config": [ 00:42:46.956 { 00:42:46.956 "method": "iobuf_set_options", 00:42:46.956 "params": { 00:42:46.956 "large_bufsize": 135168, 00:42:46.956 "large_pool_count": 1024, 00:42:46.956 "small_bufsize": 8192, 00:42:46.956 "small_pool_count": 8192 00:42:46.956 } 00:42:46.956 } 00:42:46.956 ] 00:42:46.956 }, 00:42:46.956 { 00:42:46.956 "subsystem": "sock", 00:42:46.956 "config": [ 00:42:46.956 { 00:42:46.956 "method": "sock_impl_set_options", 00:42:46.956 "params": { 00:42:46.956 "enable_ktls": false, 00:42:46.956 "enable_placement_id": 0, 00:42:46.956 "enable_quickack": false, 00:42:46.956 "enable_recv_pipe": true, 00:42:46.956 "enable_zerocopy_send_client": false, 00:42:46.956 "enable_zerocopy_send_server": true, 00:42:46.956 "impl_name": "posix", 00:42:46.956 "recv_buf_size": 2097152, 00:42:46.956 "send_buf_size": 2097152, 00:42:46.956 "tls_version": 0, 00:42:46.956 "zerocopy_threshold": 0 00:42:46.956 } 00:42:46.956 }, 00:42:46.956 { 00:42:46.956 "method": "sock_impl_set_options", 00:42:46.956 "params": { 00:42:46.956 "enable_ktls": false, 00:42:46.956 "enable_placement_id": 0, 00:42:46.956 "enable_quickack": false, 00:42:46.956 "enable_recv_pipe": true, 00:42:46.956 "enable_zerocopy_send_client": false, 00:42:46.956 "enable_zerocopy_send_server": true, 00:42:46.956 "impl_name": "ssl", 00:42:46.956 "recv_buf_size": 4096, 00:42:46.956 "send_buf_size": 4096, 00:42:46.956 "tls_version": 0, 00:42:46.956 "zerocopy_threshold": 0 00:42:46.956 } 00:42:46.956 } 00:42:46.956 ] 00:42:46.956 }, 00:42:46.956 { 00:42:46.956 "subsystem": "vmd", 00:42:46.956 "config": [] 00:42:46.956 }, 00:42:46.956 { 00:42:46.956 "subsystem": "accel", 00:42:46.956 "config": [ 00:42:46.956 { 00:42:46.956 "method": "accel_set_options", 00:42:46.956 "params": { 00:42:46.956 "buf_count": 2048, 00:42:46.956 "large_cache_size": 16, 00:42:46.956 "sequence_count": 2048, 00:42:46.956 "small_cache_size": 128, 00:42:46.956 "task_count": 2048 00:42:46.956 } 00:42:46.956 } 00:42:46.956 ] 00:42:46.956 }, 00:42:46.956 { 00:42:46.956 "subsystem": "bdev", 00:42:46.956 "config": [ 00:42:46.956 { 00:42:46.956 "method": "bdev_set_options", 00:42:46.956 "params": { 00:42:46.956 "bdev_auto_examine": true, 00:42:46.956 "bdev_io_cache_size": 256, 00:42:46.956 "bdev_io_pool_size": 65535, 00:42:46.956 "iobuf_large_cache_size": 16, 00:42:46.956 "iobuf_small_cache_size": 128 00:42:46.956 } 00:42:46.956 }, 00:42:46.956 { 00:42:46.956 "method": "bdev_raid_set_options", 00:42:46.956 "params": { 00:42:46.956 "process_window_size_kb": 1024 00:42:46.956 } 00:42:46.956 }, 00:42:46.956 { 00:42:46.956 "method": "bdev_iscsi_set_options", 00:42:46.956 "params": { 00:42:46.956 "timeout_sec": 30 00:42:46.956 } 00:42:46.956 }, 00:42:46.956 { 00:42:46.956 "method": "bdev_nvme_set_options", 00:42:46.956 "params": { 00:42:46.956 "action_on_timeout": "none", 00:42:46.956 "allow_accel_sequence": false, 00:42:46.956 "arbitration_burst": 0, 00:42:46.956 "bdev_retry_count": 3, 00:42:46.956 "ctrlr_loss_timeout_sec": 0, 00:42:46.956 "delay_cmd_submit": true, 00:42:46.956 "fast_io_fail_timeout_sec": 0, 00:42:46.956 "generate_uuids": false, 00:42:46.956 "high_priority_weight": 0, 00:42:46.956 "io_path_stat": false, 00:42:46.956 "io_queue_requests": 512, 00:42:46.956 "keep_alive_timeout_ms": 10000, 00:42:46.956 "low_priority_weight": 0, 00:42:46.956 "medium_priority_weight": 0, 00:42:46.956 "nvme_adminq_poll_period_us": 
10000, 00:42:46.956 "nvme_ioq_poll_period_us": 0, 00:42:46.956 "reconnect_delay_sec": 0, 00:42:46.956 "timeout_admin_us": 0, 00:42:46.956 "timeout_us": 0, 00:42:46.956 "transport_ack_timeout": 0, 00:42:46.956 "transport_retry_count": 4, 00:42:46.956 "transport_tos": 0 00:42:46.956 } 00:42:46.956 }, 00:42:46.956 { 00:42:46.956 "method": "bdev_nvme_attach_controller", 00:42:46.956 "params": { 00:42:46.956 "adrfam": "IPv4", 00:42:46.956 "ctrlr_loss_timeout_sec": 0, 00:42:46.956 "ddgst": false, 00:42:46.956 "fast_io_fail_timeout_sec": 0, 00:42:46.956 "hdgst": false, 00:42:46.956 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:42:46.956 "name": "TLSTEST", 00:42:46.956 "prchk_guard": false, 00:42:46.956 "prchk_reftag": false, 00:42:46.956 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:42:46.956 "reconnect_delay_sec": 0, 00:42:46.956 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:42:46.956 "traddr": "10.0.0.2", 00:42:46.956 "trsvcid": "4420", 00:42:46.956 "trtype": "TCP" 00:42:46.956 } 00:42:46.956 }, 00:42:46.956 { 00:42:46.956 "method": "bdev_nvme_set_hotplug", 00:42:46.956 "params": { 00:42:46.956 "enable": false, 00:42:46.956 "period_us": 100000 00:42:46.956 } 00:42:46.956 }, 00:42:46.956 { 00:42:46.956 "method": "bdev_wait_for_examine" 00:42:46.956 } 00:42:46.956 ] 00:42:46.956 }, 00:42:46.956 { 00:42:46.956 "subsystem": "nbd", 00:42:46.956 "config": [] 00:42:46.956 } 00:42:46.956 ] 00:42:46.956 }' 00:42:46.956 13:05:06 -- target/tls.sh@208 -- # killprocess 88867 00:42:46.956 13:05:06 -- common/autotest_common.sh@926 -- # '[' -z 88867 ']' 00:42:46.956 13:05:06 -- common/autotest_common.sh@930 -- # kill -0 88867 00:42:46.956 13:05:06 -- common/autotest_common.sh@931 -- # uname 00:42:46.956 13:05:06 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:42:46.956 13:05:06 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 88867 00:42:46.956 13:05:06 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:42:46.956 13:05:06 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:42:46.956 killing process with pid 88867 00:42:46.956 13:05:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 88867' 00:42:46.956 13:05:06 -- common/autotest_common.sh@945 -- # kill 88867 00:42:46.956 Received shutdown signal, test time was about 10.000000 seconds 00:42:46.956 00:42:46.956 Latency(us) 00:42:46.956 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:46.956 =================================================================================================================== 00:42:46.956 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:42:46.956 13:05:06 -- common/autotest_common.sh@950 -- # wait 88867 00:42:47.215 13:05:06 -- target/tls.sh@209 -- # killprocess 88764 00:42:47.215 13:05:06 -- common/autotest_common.sh@926 -- # '[' -z 88764 ']' 00:42:47.215 13:05:06 -- common/autotest_common.sh@930 -- # kill -0 88764 00:42:47.215 13:05:06 -- common/autotest_common.sh@931 -- # uname 00:42:47.215 13:05:06 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:42:47.215 13:05:06 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 88764 00:42:47.215 13:05:06 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:42:47.215 13:05:06 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:42:47.215 killing process with pid 88764 00:42:47.215 13:05:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 88764' 00:42:47.215 13:05:06 -- 
common/autotest_common.sh@945 -- # kill 88764 00:42:47.215 13:05:06 -- common/autotest_common.sh@950 -- # wait 88764 00:42:47.475 13:05:06 -- target/tls.sh@212 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:42:47.475 13:05:06 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:42:47.475 13:05:06 -- common/autotest_common.sh@712 -- # xtrace_disable 00:42:47.475 13:05:06 -- target/tls.sh@212 -- # echo '{ 00:42:47.475 "subsystems": [ 00:42:47.475 { 00:42:47.475 "subsystem": "iobuf", 00:42:47.475 "config": [ 00:42:47.475 { 00:42:47.475 "method": "iobuf_set_options", 00:42:47.475 "params": { 00:42:47.475 "large_bufsize": 135168, 00:42:47.475 "large_pool_count": 1024, 00:42:47.475 "small_bufsize": 8192, 00:42:47.475 "small_pool_count": 8192 00:42:47.475 } 00:42:47.475 } 00:42:47.475 ] 00:42:47.475 }, 00:42:47.475 { 00:42:47.475 "subsystem": "sock", 00:42:47.475 "config": [ 00:42:47.475 { 00:42:47.475 "method": "sock_impl_set_options", 00:42:47.475 "params": { 00:42:47.475 "enable_ktls": false, 00:42:47.475 "enable_placement_id": 0, 00:42:47.475 "enable_quickack": false, 00:42:47.475 "enable_recv_pipe": true, 00:42:47.475 "enable_zerocopy_send_client": false, 00:42:47.475 "enable_zerocopy_send_server": true, 00:42:47.475 "impl_name": "posix", 00:42:47.475 "recv_buf_size": 2097152, 00:42:47.475 "send_buf_size": 2097152, 00:42:47.475 "tls_version": 0, 00:42:47.475 "zerocopy_threshold": 0 00:42:47.475 } 00:42:47.475 }, 00:42:47.475 { 00:42:47.475 "method": "sock_impl_set_options", 00:42:47.475 "params": { 00:42:47.475 "enable_ktls": false, 00:42:47.475 "enable_placement_id": 0, 00:42:47.475 "enable_quickack": false, 00:42:47.475 "enable_recv_pipe": true, 00:42:47.475 "enable_zerocopy_send_client": false, 00:42:47.475 "enable_zerocopy_send_server": true, 00:42:47.475 "impl_name": "ssl", 00:42:47.475 "recv_buf_size": 4096, 00:42:47.475 "send_buf_size": 4096, 00:42:47.475 "tls_version": 0, 00:42:47.475 "zerocopy_threshold": 0 00:42:47.475 } 00:42:47.475 } 00:42:47.475 ] 00:42:47.475 }, 00:42:47.475 { 00:42:47.475 "subsystem": "vmd", 00:42:47.475 "config": [] 00:42:47.475 }, 00:42:47.475 { 00:42:47.475 "subsystem": "accel", 00:42:47.475 "config": [ 00:42:47.475 { 00:42:47.475 "method": "accel_set_options", 00:42:47.475 "params": { 00:42:47.475 "buf_count": 2048, 00:42:47.475 "large_cache_size": 16, 00:42:47.475 "sequence_count": 2048, 00:42:47.475 "small_cache_size": 128, 00:42:47.475 "task_count": 2048 00:42:47.475 } 00:42:47.475 } 00:42:47.475 ] 00:42:47.475 }, 00:42:47.475 { 00:42:47.475 "subsystem": "bdev", 00:42:47.475 "config": [ 00:42:47.475 { 00:42:47.475 "method": "bdev_set_options", 00:42:47.475 "params": { 00:42:47.475 "bdev_auto_examine": true, 00:42:47.475 "bdev_io_cache_size": 256, 00:42:47.475 "bdev_io_pool_size": 65535, 00:42:47.475 "iobuf_large_cache_size": 16, 00:42:47.475 "iobuf_small_cache_size": 128 00:42:47.475 } 00:42:47.475 }, 00:42:47.475 { 00:42:47.475 "method": "bdev_raid_set_options", 00:42:47.475 "params": { 00:42:47.475 "process_window_size_kb": 1024 00:42:47.475 } 00:42:47.475 }, 00:42:47.475 { 00:42:47.475 "method": "bdev_iscsi_set_options", 00:42:47.475 "params": { 00:42:47.475 "timeout_sec": 30 00:42:47.475 } 00:42:47.475 }, 00:42:47.475 { 00:42:47.475 "method": "bdev_nvme_set_options", 00:42:47.475 "params": { 00:42:47.475 "action_on_timeout": "none", 00:42:47.475 "allow_accel_sequence": false, 00:42:47.475 "arbitration_burst": 0, 00:42:47.475 "bdev_retry_count": 3, 00:42:47.475 "ctrlr_loss_timeout_sec": 0, 00:42:47.475 "delay_cmd_submit": true, 00:42:47.475 
"fast_io_fail_timeout_sec": 0, 00:42:47.475 "generate_uuids": false, 00:42:47.475 "high_priority_weight": 0, 00:42:47.475 "io_path_stat": false, 00:42:47.475 "io_queue_requests": 0, 00:42:47.475 "keep_alive_timeout_ms": 10000, 00:42:47.475 "low_priority_weight": 0, 00:42:47.475 "medium_priority_weight": 0, 00:42:47.475 "nvme_adminq_poll_period_us": 10000, 00:42:47.475 "nvme_ioq_poll_period_us": 0, 00:42:47.475 "reconnect_delay_sec": 0, 00:42:47.475 "timeout_admin_us": 0, 00:42:47.475 "timeout_us": 0, 00:42:47.475 "transport_ack_timeout": 0, 00:42:47.475 "transport_retry_count": 4, 00:42:47.475 "transport_tos": 0 00:42:47.475 } 00:42:47.475 }, 00:42:47.475 { 00:42:47.475 "method": "bdev_nvme_set_hotplug", 00:42:47.475 "params": { 00:42:47.475 "enable": false, 00:42:47.475 "period_us": 100000 00:42:47.475 } 00:42:47.475 }, 00:42:47.475 { 00:42:47.475 "method": "bdev_malloc_create", 00:42:47.475 "params": { 00:42:47.475 "block_size": 4096, 00:42:47.475 "name": "malloc0", 00:42:47.475 "num_blocks": 8192, 00:42:47.475 "optimal_io_boundary": 0, 00:42:47.475 "physical_block_size": 4096, 00:42:47.475 "uuid": "04d11f30-a5a7-4555-ba39-2c4cb1aedd69" 00:42:47.475 } 00:42:47.475 }, 00:42:47.475 { 00:42:47.475 "method": "bdev_wait_for_examine" 00:42:47.475 } 00:42:47.475 ] 00:42:47.475 }, 00:42:47.475 { 00:42:47.475 "subsystem": "nbd", 00:42:47.475 "config": [] 00:42:47.475 }, 00:42:47.475 { 00:42:47.475 "subsystem": "scheduler", 00:42:47.475 "config": [ 00:42:47.475 { 00:42:47.475 "method": "framework_set_scheduler", 00:42:47.475 "params": { 00:42:47.475 "name": "static" 00:42:47.475 } 00:42:47.475 } 00:42:47.475 ] 00:42:47.475 }, 00:42:47.475 { 00:42:47.475 "subsystem": "nvmf", 00:42:47.475 "config": [ 00:42:47.475 { 00:42:47.475 "method": "nvmf_set_config", 00:42:47.475 "params": { 00:42:47.475 "admin_cmd_passthru": { 00:42:47.475 "identify_ctrlr": false 00:42:47.475 }, 00:42:47.475 "discovery_filter": "match_any" 00:42:47.475 } 00:42:47.475 }, 00:42:47.475 { 00:42:47.475 "method": "nvmf_set_max_subsystems", 00:42:47.475 "params": { 00:42:47.475 "max_subsystems": 1024 00:42:47.475 } 00:42:47.475 }, 00:42:47.475 { 00:42:47.475 "method": "nvmf_set_crdt", 00:42:47.475 "params": { 00:42:47.475 "crdt1": 0, 00:42:47.475 "crdt2": 0, 00:42:47.475 "crdt3": 0 00:42:47.475 } 00:42:47.475 }, 00:42:47.475 { 00:42:47.475 "method": "nvmf_create_transport", 00:42:47.475 "params": { 00:42:47.475 "abort_timeout_sec": 1, 00:42:47.475 "buf_cache_size": 4294967295, 00:42:47.475 "c2h_success": false, 00:42:47.475 "dif_insert_or_strip": false, 00:42:47.475 "in_capsule_data_size": 4096, 00:42:47.475 "io_unit_size": 131072, 00:42:47.475 "max_aq_depth": 128, 00:42:47.475 "max_io_qpairs_per_ctrlr": 127, 00:42:47.475 "max_io_size": 131072, 00:42:47.475 "max_queue_depth": 128, 00:42:47.475 "num_shared_buffers": 511, 00:42:47.475 "sock_priority": 0, 00:42:47.475 "trtype": "TCP", 00:42:47.475 "zcopy": false 00:42:47.475 } 00:42:47.475 }, 00:42:47.475 { 00:42:47.476 "method": "nvmf_create_subsystem", 00:42:47.476 "params": { 00:42:47.476 "allow_any_host": false, 00:42:47.476 "ana_reporting": false, 00:42:47.476 "max_cntlid": 65519, 00:42:47.476 "max_namespaces": 10, 00:42:47.476 "min_cntlid": 1, 00:42:47.476 "model_number": "SPDK bdev Controller", 00:42:47.476 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:42:47.476 "serial_number": "SPDK00000000000001" 00:42:47.476 } 00:42:47.476 }, 00:42:47.476 { 00:42:47.476 "method": "nvmf_subsystem_add_host", 00:42:47.476 "params": { 00:42:47.476 "host": "nqn.2016-06.io.spdk:host1", 00:42:47.476 
"nqn": "nqn.2016-06.io.spdk:cnode1", 00:42:47.476 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:42:47.476 } 00:42:47.476 }, 00:42:47.476 { 00:42:47.476 "method": "nvmf_subsystem_add_ns", 00:42:47.476 "params": { 00:42:47.476 "namespace": { 00:42:47.476 "bdev_name": "malloc0", 00:42:47.476 "nguid": "04D11F30A5A74555BA392C4CB1AEDD69", 00:42:47.476 "nsid": 1, 00:42:47.476 "uuid": "04d11f30-a5a7-4555-ba39-2c4cb1aedd69" 00:42:47.476 }, 00:42:47.476 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:42:47.476 } 00:42:47.476 }, 00:42:47.476 { 00:42:47.476 "method": "nvmf_subsystem_add_listener", 00:42:47.476 "params": { 00:42:47.476 "listen_address": { 00:42:47.476 "adrfam": "IPv4", 00:42:47.476 "traddr": "10.0.0.2", 00:42:47.476 "trsvcid": "4420", 00:42:47.476 "trtype": "TCP" 00:42:47.476 }, 00:42:47.476 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:42:47.476 "secure_channel": true 00:42:47.476 } 00:42:47.476 } 00:42:47.476 ] 00:42:47.476 } 00:42:47.476 ] 00:42:47.476 }' 00:42:47.476 13:05:06 -- common/autotest_common.sh@10 -- # set +x 00:42:47.476 13:05:06 -- nvmf/common.sh@469 -- # nvmfpid=88940 00:42:47.476 13:05:06 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:42:47.476 13:05:06 -- nvmf/common.sh@470 -- # waitforlisten 88940 00:42:47.476 13:05:06 -- common/autotest_common.sh@819 -- # '[' -z 88940 ']' 00:42:47.476 13:05:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:47.476 13:05:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:42:47.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:47.476 13:05:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:47.476 13:05:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:42:47.476 13:05:06 -- common/autotest_common.sh@10 -- # set +x 00:42:47.476 [2024-07-22 13:05:06.802549] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:42:47.476 [2024-07-22 13:05:06.802634] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:47.735 [2024-07-22 13:05:06.932180] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:47.735 [2024-07-22 13:05:07.007753] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:42:47.735 [2024-07-22 13:05:07.007926] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:47.735 [2024-07-22 13:05:07.007965] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:47.735 [2024-07-22 13:05:07.007982] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:42:47.735 [2024-07-22 13:05:07.008016] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:42:47.994 [2024-07-22 13:05:07.222529] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:47.994 [2024-07-22 13:05:07.254462] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:42:47.994 [2024-07-22 13:05:07.254787] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:48.562 13:05:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:42:48.562 13:05:07 -- common/autotest_common.sh@852 -- # return 0 00:42:48.562 13:05:07 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:42:48.562 13:05:07 -- common/autotest_common.sh@718 -- # xtrace_disable 00:42:48.562 13:05:07 -- common/autotest_common.sh@10 -- # set +x 00:42:48.562 13:05:07 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:48.562 13:05:07 -- target/tls.sh@216 -- # bdevperf_pid=88984 00:42:48.562 13:05:07 -- target/tls.sh@217 -- # waitforlisten 88984 /var/tmp/bdevperf.sock 00:42:48.562 13:05:07 -- common/autotest_common.sh@819 -- # '[' -z 88984 ']' 00:42:48.562 13:05:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:42:48.562 13:05:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:42:48.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:42:48.562 13:05:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:42:48.562 13:05:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:42:48.562 13:05:07 -- common/autotest_common.sh@10 -- # set +x 00:42:48.562 13:05:07 -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:42:48.562 13:05:07 -- target/tls.sh@213 -- # echo '{ 00:42:48.562 "subsystems": [ 00:42:48.562 { 00:42:48.562 "subsystem": "iobuf", 00:42:48.562 "config": [ 00:42:48.562 { 00:42:48.562 "method": "iobuf_set_options", 00:42:48.562 "params": { 00:42:48.562 "large_bufsize": 135168, 00:42:48.562 "large_pool_count": 1024, 00:42:48.562 "small_bufsize": 8192, 00:42:48.562 "small_pool_count": 8192 00:42:48.562 } 00:42:48.562 } 00:42:48.562 ] 00:42:48.562 }, 00:42:48.562 { 00:42:48.562 "subsystem": "sock", 00:42:48.562 "config": [ 00:42:48.562 { 00:42:48.562 "method": "sock_impl_set_options", 00:42:48.562 "params": { 00:42:48.562 "enable_ktls": false, 00:42:48.562 "enable_placement_id": 0, 00:42:48.562 "enable_quickack": false, 00:42:48.562 "enable_recv_pipe": true, 00:42:48.562 "enable_zerocopy_send_client": false, 00:42:48.562 "enable_zerocopy_send_server": true, 00:42:48.562 "impl_name": "posix", 00:42:48.562 "recv_buf_size": 2097152, 00:42:48.562 "send_buf_size": 2097152, 00:42:48.562 "tls_version": 0, 00:42:48.562 "zerocopy_threshold": 0 00:42:48.562 } 00:42:48.562 }, 00:42:48.562 { 00:42:48.562 "method": "sock_impl_set_options", 00:42:48.562 "params": { 00:42:48.562 "enable_ktls": false, 00:42:48.562 "enable_placement_id": 0, 00:42:48.562 "enable_quickack": false, 00:42:48.562 "enable_recv_pipe": true, 00:42:48.562 "enable_zerocopy_send_client": false, 00:42:48.562 "enable_zerocopy_send_server": true, 00:42:48.562 "impl_name": "ssl", 00:42:48.562 "recv_buf_size": 4096, 00:42:48.562 "send_buf_size": 4096, 00:42:48.562 "tls_version": 0, 00:42:48.562 "zerocopy_threshold": 0 
00:42:48.562 } 00:42:48.562 } 00:42:48.562 ] 00:42:48.562 }, 00:42:48.562 { 00:42:48.562 "subsystem": "vmd", 00:42:48.562 "config": [] 00:42:48.562 }, 00:42:48.562 { 00:42:48.562 "subsystem": "accel", 00:42:48.562 "config": [ 00:42:48.562 { 00:42:48.562 "method": "accel_set_options", 00:42:48.562 "params": { 00:42:48.562 "buf_count": 2048, 00:42:48.562 "large_cache_size": 16, 00:42:48.562 "sequence_count": 2048, 00:42:48.562 "small_cache_size": 128, 00:42:48.562 "task_count": 2048 00:42:48.562 } 00:42:48.562 } 00:42:48.562 ] 00:42:48.562 }, 00:42:48.562 { 00:42:48.562 "subsystem": "bdev", 00:42:48.562 "config": [ 00:42:48.562 { 00:42:48.562 "method": "bdev_set_options", 00:42:48.562 "params": { 00:42:48.562 "bdev_auto_examine": true, 00:42:48.562 "bdev_io_cache_size": 256, 00:42:48.562 "bdev_io_pool_size": 65535, 00:42:48.562 "iobuf_large_cache_size": 16, 00:42:48.562 "iobuf_small_cache_size": 128 00:42:48.562 } 00:42:48.562 }, 00:42:48.562 { 00:42:48.562 "method": "bdev_raid_set_options", 00:42:48.562 "params": { 00:42:48.562 "process_window_size_kb": 1024 00:42:48.562 } 00:42:48.562 }, 00:42:48.562 { 00:42:48.562 "method": "bdev_iscsi_set_options", 00:42:48.562 "params": { 00:42:48.562 "timeout_sec": 30 00:42:48.562 } 00:42:48.562 }, 00:42:48.562 { 00:42:48.562 "method": "bdev_nvme_set_options", 00:42:48.562 "params": { 00:42:48.562 "action_on_timeout": "none", 00:42:48.562 "allow_accel_sequence": false, 00:42:48.562 "arbitration_burst": 0, 00:42:48.562 "bdev_retry_count": 3, 00:42:48.562 "ctrlr_loss_timeout_sec": 0, 00:42:48.562 "delay_cmd_submit": true, 00:42:48.562 "fast_io_fail_timeout_sec": 0, 00:42:48.562 "generate_uuids": false, 00:42:48.562 "high_priority_weight": 0, 00:42:48.562 "io_path_stat": false, 00:42:48.562 "io_queue_requests": 512, 00:42:48.562 "keep_alive_timeout_ms": 10000, 00:42:48.562 "low_priority_weight": 0, 00:42:48.562 "medium_priority_weight": 0, 00:42:48.562 "nvme_adminq_poll_period_us": 10000, 00:42:48.562 "nvme_ioq_poll_period_us": 0, 00:42:48.562 "reconnect_delay_sec": 0, 00:42:48.562 "timeout_admin_us": 0, 00:42:48.562 "timeout_us": 0, 00:42:48.562 "transport_ack_timeout": 0, 00:42:48.562 "transport_retry_count": 4, 00:42:48.562 "transport_tos": 0 00:42:48.562 } 00:42:48.562 }, 00:42:48.562 { 00:42:48.562 "method": "bdev_nvme_attach_controller", 00:42:48.562 "params": { 00:42:48.562 "adrfam": "IPv4", 00:42:48.562 "ctrlr_loss_timeout_sec": 0, 00:42:48.562 "ddgst": false, 00:42:48.562 "fast_io_fail_timeout_sec": 0, 00:42:48.563 "hdgst": false, 00:42:48.563 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:42:48.563 "name": "TLSTEST", 00:42:48.563 "prchk_guard": false, 00:42:48.563 "prchk_reftag": false, 00:42:48.563 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:42:48.563 "reconnect_delay_sec": 0, 00:42:48.563 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:42:48.563 "traddr": "10.0.0.2", 00:42:48.563 "trsvcid": "4420", 00:42:48.563 "trtype": "TCP" 00:42:48.563 } 00:42:48.563 }, 00:42:48.563 { 00:42:48.563 "method": "bdev_nvme_set_hotplug", 00:42:48.563 "params": { 00:42:48.563 "enable": false, 00:42:48.563 "period_us": 100000 00:42:48.563 } 00:42:48.563 }, 00:42:48.563 { 00:42:48.563 "method": "bdev_wait_for_examine" 00:42:48.563 } 00:42:48.563 ] 00:42:48.563 }, 00:42:48.563 { 00:42:48.563 "subsystem": "nbd", 00:42:48.563 "config": [] 00:42:48.563 } 00:42:48.563 ] 00:42:48.563 }' 00:42:48.563 [2024-07-22 13:05:07.799321] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
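The bdevperf half of the TLS test follows the same shape: bdevperf is started idle with -z and its own RPC socket, the JSON shown above (including the bdev_nvme_attach_controller entry that carries the PSK) arrives on /dev/fd/63 via process substitution, and the I/O run is then triggered over that socket. A rough sketch of the sequence, using the paths from this run and with the waitforlisten polling step omitted (bdevperfconf is assumed to hold the JSON captured at target/tls.sh@206 above):

# start bdevperf idle; it only builds the bdevs from the config, then waits for RPC
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 \
    -c <(echo "$bdevperfconf") &

# once the socket is listening, kick off the verify workload
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
    -t 20 -s /var/tmp/bdevperf.sock perform_tests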
00:42:48.563 [2024-07-22 13:05:07.799404] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88984 ] 00:42:48.563 [2024-07-22 13:05:07.931802] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:48.822 [2024-07-22 13:05:07.990525] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:42:48.822 [2024-07-22 13:05:08.141787] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:42:49.389 13:05:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:42:49.389 13:05:08 -- common/autotest_common.sh@852 -- # return 0 00:42:49.389 13:05:08 -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:42:49.647 Running I/O for 10 seconds... 00:42:59.623 00:42:59.623 Latency(us) 00:42:59.623 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:59.623 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:42:59.624 Verification LBA range: start 0x0 length 0x2000 00:42:59.624 TLSTESTn1 : 10.02 6164.07 24.08 0.00 0.00 20730.57 4110.89 19422.49 00:42:59.624 =================================================================================================================== 00:42:59.624 Total : 6164.07 24.08 0.00 0.00 20730.57 4110.89 19422.49 00:42:59.624 0 00:42:59.624 13:05:18 -- target/tls.sh@222 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:42:59.624 13:05:18 -- target/tls.sh@223 -- # killprocess 88984 00:42:59.624 13:05:18 -- common/autotest_common.sh@926 -- # '[' -z 88984 ']' 00:42:59.624 13:05:18 -- common/autotest_common.sh@930 -- # kill -0 88984 00:42:59.624 13:05:18 -- common/autotest_common.sh@931 -- # uname 00:42:59.624 13:05:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:42:59.624 13:05:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 88984 00:42:59.624 13:05:18 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:42:59.624 killing process with pid 88984 00:42:59.624 13:05:18 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:42:59.624 13:05:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 88984' 00:42:59.624 Received shutdown signal, test time was about 10.000000 seconds 00:42:59.624 00:42:59.624 Latency(us) 00:42:59.624 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:59.624 =================================================================================================================== 00:42:59.624 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:42:59.624 13:05:18 -- common/autotest_common.sh@945 -- # kill 88984 00:42:59.624 13:05:18 -- common/autotest_common.sh@950 -- # wait 88984 00:42:59.882 13:05:19 -- target/tls.sh@224 -- # killprocess 88940 00:42:59.882 13:05:19 -- common/autotest_common.sh@926 -- # '[' -z 88940 ']' 00:42:59.882 13:05:19 -- common/autotest_common.sh@930 -- # kill -0 88940 00:42:59.882 13:05:19 -- common/autotest_common.sh@931 -- # uname 00:42:59.882 13:05:19 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:42:59.882 13:05:19 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 88940 00:42:59.882 13:05:19 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:42:59.882 killing process with pid 88940 00:42:59.882 13:05:19 -- 
common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:42:59.882 13:05:19 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 88940' 00:42:59.882 13:05:19 -- common/autotest_common.sh@945 -- # kill 88940 00:42:59.882 13:05:19 -- common/autotest_common.sh@950 -- # wait 88940 00:43:00.141 13:05:19 -- target/tls.sh@226 -- # trap - SIGINT SIGTERM EXIT 00:43:00.141 13:05:19 -- target/tls.sh@227 -- # cleanup 00:43:00.141 13:05:19 -- target/tls.sh@15 -- # process_shm --id 0 00:43:00.141 13:05:19 -- common/autotest_common.sh@796 -- # type=--id 00:43:00.141 13:05:19 -- common/autotest_common.sh@797 -- # id=0 00:43:00.141 13:05:19 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:43:00.141 13:05:19 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:43:00.141 13:05:19 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:43:00.141 13:05:19 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:43:00.141 13:05:19 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:43:00.141 13:05:19 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:43:00.141 nvmf_trace.0 00:43:00.141 13:05:19 -- common/autotest_common.sh@811 -- # return 0 00:43:00.141 13:05:19 -- target/tls.sh@16 -- # killprocess 88984 00:43:00.141 13:05:19 -- common/autotest_common.sh@926 -- # '[' -z 88984 ']' 00:43:00.141 13:05:19 -- common/autotest_common.sh@930 -- # kill -0 88984 00:43:00.141 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (88984) - No such process 00:43:00.141 Process with pid 88984 is not found 00:43:00.141 13:05:19 -- common/autotest_common.sh@953 -- # echo 'Process with pid 88984 is not found' 00:43:00.141 13:05:19 -- target/tls.sh@17 -- # nvmftestfini 00:43:00.141 13:05:19 -- nvmf/common.sh@476 -- # nvmfcleanup 00:43:00.141 13:05:19 -- nvmf/common.sh@116 -- # sync 00:43:00.141 13:05:19 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:43:00.141 13:05:19 -- nvmf/common.sh@119 -- # set +e 00:43:00.141 13:05:19 -- nvmf/common.sh@120 -- # for i in {1..20} 00:43:00.141 13:05:19 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:43:00.141 rmmod nvme_tcp 00:43:00.141 rmmod nvme_fabrics 00:43:00.141 rmmod nvme_keyring 00:43:00.141 13:05:19 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:43:00.141 13:05:19 -- nvmf/common.sh@123 -- # set -e 00:43:00.141 13:05:19 -- nvmf/common.sh@124 -- # return 0 00:43:00.141 13:05:19 -- nvmf/common.sh@477 -- # '[' -n 88940 ']' 00:43:00.141 13:05:19 -- nvmf/common.sh@478 -- # killprocess 88940 00:43:00.141 13:05:19 -- common/autotest_common.sh@926 -- # '[' -z 88940 ']' 00:43:00.141 13:05:19 -- common/autotest_common.sh@930 -- # kill -0 88940 00:43:00.141 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (88940) - No such process 00:43:00.141 Process with pid 88940 is not found 00:43:00.141 13:05:19 -- common/autotest_common.sh@953 -- # echo 'Process with pid 88940 is not found' 00:43:00.141 13:05:19 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:43:00.141 13:05:19 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:43:00.141 13:05:19 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:43:00.141 13:05:19 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:43:00.141 13:05:19 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:43:00.141 13:05:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:00.141 13:05:19 -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:43:00.141 13:05:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:00.141 13:05:19 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:43:00.141 13:05:19 -- target/tls.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:43:00.141 00:43:00.141 real 1m10.759s 00:43:00.141 user 1m48.310s 00:43:00.141 sys 0m25.162s 00:43:00.141 ************************************ 00:43:00.141 END TEST nvmf_tls 00:43:00.141 ************************************ 00:43:00.141 13:05:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:43:00.141 13:05:19 -- common/autotest_common.sh@10 -- # set +x 00:43:00.141 13:05:19 -- nvmf/nvmf.sh@60 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:43:00.141 13:05:19 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:43:00.141 13:05:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:43:00.141 13:05:19 -- common/autotest_common.sh@10 -- # set +x 00:43:00.141 ************************************ 00:43:00.141 START TEST nvmf_fips 00:43:00.141 ************************************ 00:43:00.141 13:05:19 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:43:00.401 * Looking for test storage... 00:43:00.401 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:43:00.401 13:05:19 -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:43:00.401 13:05:19 -- nvmf/common.sh@7 -- # uname -s 00:43:00.401 13:05:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:00.401 13:05:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:00.401 13:05:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:00.401 13:05:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:00.401 13:05:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:00.401 13:05:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:00.401 13:05:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:00.401 13:05:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:00.401 13:05:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:00.401 13:05:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:00.401 13:05:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:864ae4a3-cd96-439b-8ef2-6dab4d992115 00:43:00.401 13:05:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=864ae4a3-cd96-439b-8ef2-6dab4d992115 00:43:00.401 13:05:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:00.401 13:05:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:00.401 13:05:19 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:43:00.401 13:05:19 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:43:00.401 13:05:19 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:00.401 13:05:19 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:00.401 13:05:19 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:00.401 13:05:19 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:00.401 13:05:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:00.401 13:05:19 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:00.401 13:05:19 -- paths/export.sh@5 -- # export PATH 00:43:00.401 13:05:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:00.401 13:05:19 -- nvmf/common.sh@46 -- # : 0 00:43:00.401 13:05:19 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:43:00.401 13:05:19 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:43:00.401 13:05:19 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:43:00.401 13:05:19 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:00.401 13:05:19 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:00.401 13:05:19 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:43:00.401 13:05:19 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:43:00.401 13:05:19 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:43:00.401 13:05:19 -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:43:00.401 13:05:19 -- fips/fips.sh@89 -- # check_openssl_version 00:43:00.401 13:05:19 -- fips/fips.sh@83 -- # local target=3.0.0 00:43:00.401 13:05:19 -- fips/fips.sh@85 -- # openssl version 00:43:00.401 13:05:19 -- fips/fips.sh@85 -- # awk '{print $2}' 00:43:00.401 13:05:19 -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:43:00.401 13:05:19 -- scripts/common.sh@375 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:43:00.401 13:05:19 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:43:00.401 13:05:19 -- scripts/common.sh@333 -- # local ver2 
ver2_l 00:43:00.401 13:05:19 -- scripts/common.sh@335 -- # IFS=.-: 00:43:00.401 13:05:19 -- scripts/common.sh@335 -- # read -ra ver1 00:43:00.401 13:05:19 -- scripts/common.sh@336 -- # IFS=.-: 00:43:00.401 13:05:19 -- scripts/common.sh@336 -- # read -ra ver2 00:43:00.401 13:05:19 -- scripts/common.sh@337 -- # local 'op=>=' 00:43:00.401 13:05:19 -- scripts/common.sh@339 -- # ver1_l=3 00:43:00.401 13:05:19 -- scripts/common.sh@340 -- # ver2_l=3 00:43:00.401 13:05:19 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:43:00.401 13:05:19 -- scripts/common.sh@343 -- # case "$op" in 00:43:00.401 13:05:19 -- scripts/common.sh@347 -- # : 1 00:43:00.401 13:05:19 -- scripts/common.sh@363 -- # (( v = 0 )) 00:43:00.401 13:05:19 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:43:00.401 13:05:19 -- scripts/common.sh@364 -- # decimal 3 00:43:00.401 13:05:19 -- scripts/common.sh@352 -- # local d=3 00:43:00.401 13:05:19 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:43:00.401 13:05:19 -- scripts/common.sh@354 -- # echo 3 00:43:00.401 13:05:19 -- scripts/common.sh@364 -- # ver1[v]=3 00:43:00.401 13:05:19 -- scripts/common.sh@365 -- # decimal 3 00:43:00.401 13:05:19 -- scripts/common.sh@352 -- # local d=3 00:43:00.401 13:05:19 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:43:00.401 13:05:19 -- scripts/common.sh@354 -- # echo 3 00:43:00.401 13:05:19 -- scripts/common.sh@365 -- # ver2[v]=3 00:43:00.401 13:05:19 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:43:00.401 13:05:19 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:43:00.401 13:05:19 -- scripts/common.sh@363 -- # (( v++ )) 00:43:00.401 13:05:19 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:43:00.401 13:05:19 -- scripts/common.sh@364 -- # decimal 0 00:43:00.401 13:05:19 -- scripts/common.sh@352 -- # local d=0 00:43:00.401 13:05:19 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:43:00.401 13:05:19 -- scripts/common.sh@354 -- # echo 0 00:43:00.401 13:05:19 -- scripts/common.sh@364 -- # ver1[v]=0 00:43:00.401 13:05:19 -- scripts/common.sh@365 -- # decimal 0 00:43:00.401 13:05:19 -- scripts/common.sh@352 -- # local d=0 00:43:00.401 13:05:19 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:43:00.401 13:05:19 -- scripts/common.sh@354 -- # echo 0 00:43:00.401 13:05:19 -- scripts/common.sh@365 -- # ver2[v]=0 00:43:00.401 13:05:19 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:43:00.401 13:05:19 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:43:00.401 13:05:19 -- scripts/common.sh@363 -- # (( v++ )) 00:43:00.401 13:05:19 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:43:00.401 13:05:19 -- scripts/common.sh@364 -- # decimal 9 00:43:00.401 13:05:19 -- scripts/common.sh@352 -- # local d=9 00:43:00.401 13:05:19 -- scripts/common.sh@353 -- # [[ 9 =~ ^[0-9]+$ ]] 00:43:00.401 13:05:19 -- scripts/common.sh@354 -- # echo 9 00:43:00.401 13:05:19 -- scripts/common.sh@364 -- # ver1[v]=9 00:43:00.401 13:05:19 -- scripts/common.sh@365 -- # decimal 0 00:43:00.401 13:05:19 -- scripts/common.sh@352 -- # local d=0 00:43:00.401 13:05:19 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:43:00.401 13:05:19 -- scripts/common.sh@354 -- # echo 0 00:43:00.401 13:05:19 -- scripts/common.sh@365 -- # ver2[v]=0 00:43:00.401 13:05:19 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:43:00.401 13:05:19 -- scripts/common.sh@366 -- # return 0 00:43:00.401 13:05:19 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:43:00.401 13:05:19 -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:43:00.401 13:05:19 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:43:00.401 13:05:19 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:43:00.401 13:05:19 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:43:00.401 13:05:19 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:43:00.401 13:05:19 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:43:00.402 13:05:19 -- fips/fips.sh@105 -- # export OPENSSL_FORCE_FIPS_MODE=build_openssl_config 00:43:00.402 13:05:19 -- fips/fips.sh@105 -- # OPENSSL_FORCE_FIPS_MODE=build_openssl_config 00:43:00.402 13:05:19 -- fips/fips.sh@114 -- # build_openssl_config 00:43:00.402 13:05:19 -- fips/fips.sh@37 -- # cat 00:43:00.402 13:05:19 -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:43:00.402 13:05:19 -- fips/fips.sh@58 -- # cat - 00:43:00.402 13:05:19 -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:43:00.402 13:05:19 -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:43:00.402 13:05:19 -- fips/fips.sh@117 -- # mapfile -t providers 00:43:00.402 13:05:19 -- fips/fips.sh@117 -- # grep name 00:43:00.402 13:05:19 -- fips/fips.sh@117 -- # OPENSSL_CONF=spdk_fips.conf 00:43:00.402 13:05:19 -- fips/fips.sh@117 -- # openssl list -providers 00:43:00.402 13:05:19 -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:43:00.402 13:05:19 -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:43:00.402 13:05:19 -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:43:00.402 13:05:19 -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:43:00.402 13:05:19 -- common/autotest_common.sh@640 -- # local es=0 00:43:00.402 13:05:19 -- common/autotest_common.sh@642 -- # valid_exec_arg openssl md5 /dev/fd/62 00:43:00.402 13:05:19 -- common/autotest_common.sh@628 -- # local arg=openssl 00:43:00.402 13:05:19 -- fips/fips.sh@128 -- # : 00:43:00.402 13:05:19 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:43:00.402 13:05:19 -- common/autotest_common.sh@632 -- # type -t openssl 00:43:00.402 13:05:19 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:43:00.402 13:05:19 -- common/autotest_common.sh@634 -- # type -P openssl 00:43:00.402 13:05:19 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:43:00.402 13:05:19 -- common/autotest_common.sh@634 -- # arg=/usr/bin/openssl 00:43:00.402 13:05:19 -- common/autotest_common.sh@634 -- # [[ -x /usr/bin/openssl ]] 00:43:00.402 13:05:19 -- common/autotest_common.sh@643 -- # openssl md5 /dev/fd/62 00:43:00.402 Error setting digest 00:43:00.402 0042F9FC497F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:43:00.402 0042F9FC497F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:43:00.402 13:05:19 -- common/autotest_common.sh@643 -- # es=1 00:43:00.402 13:05:19 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:43:00.402 13:05:19 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:43:00.402 13:05:19 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:43:00.402 13:05:19 -- fips/fips.sh@131 -- # nvmftestinit 00:43:00.402 13:05:19 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:43:00.402 13:05:19 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:43:00.402 13:05:19 -- nvmf/common.sh@436 -- # prepare_net_devs 00:43:00.402 13:05:19 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:43:00.402 13:05:19 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:43:00.402 13:05:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:00.402 13:05:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:43:00.402 13:05:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:00.402 13:05:19 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:43:00.402 13:05:19 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:43:00.402 13:05:19 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:43:00.402 13:05:19 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:43:00.402 13:05:19 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:43:00.402 13:05:19 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:43:00.402 13:05:19 -- 
nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:43:00.402 13:05:19 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:43:00.402 13:05:19 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:43:00.402 13:05:19 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:43:00.402 13:05:19 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:43:00.402 13:05:19 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:43:00.402 13:05:19 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:43:00.402 13:05:19 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:43:00.402 13:05:19 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:43:00.402 13:05:19 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:43:00.402 13:05:19 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:43:00.402 13:05:19 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:43:00.402 13:05:19 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:43:00.661 13:05:19 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:43:00.661 Cannot find device "nvmf_tgt_br" 00:43:00.661 13:05:19 -- nvmf/common.sh@154 -- # true 00:43:00.661 13:05:19 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:43:00.661 Cannot find device "nvmf_tgt_br2" 00:43:00.661 13:05:19 -- nvmf/common.sh@155 -- # true 00:43:00.661 13:05:19 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:43:00.661 13:05:19 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:43:00.661 Cannot find device "nvmf_tgt_br" 00:43:00.661 13:05:19 -- nvmf/common.sh@157 -- # true 00:43:00.661 13:05:19 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:43:00.661 Cannot find device "nvmf_tgt_br2" 00:43:00.661 13:05:19 -- nvmf/common.sh@158 -- # true 00:43:00.661 13:05:19 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:43:00.661 13:05:19 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:43:00.661 13:05:19 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:43:00.661 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:43:00.661 13:05:19 -- nvmf/common.sh@161 -- # true 00:43:00.661 13:05:19 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:43:00.661 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:43:00.661 13:05:19 -- nvmf/common.sh@162 -- # true 00:43:00.661 13:05:19 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:43:00.661 13:05:19 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:43:00.661 13:05:19 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:43:00.661 13:05:19 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:43:00.661 13:05:19 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:43:00.661 13:05:19 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:43:00.661 13:05:20 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:43:00.661 13:05:20 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:43:00.661 13:05:20 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:43:00.661 13:05:20 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:43:00.661 13:05:20 -- 
nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:43:00.661 13:05:20 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:43:00.661 13:05:20 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:43:00.661 13:05:20 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:43:00.661 13:05:20 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:43:00.661 13:05:20 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:43:00.661 13:05:20 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:43:00.661 13:05:20 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:43:00.661 13:05:20 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:43:00.920 13:05:20 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:43:00.920 13:05:20 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:43:00.920 13:05:20 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:43:00.920 13:05:20 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:43:00.920 13:05:20 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:43:00.920 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:43:00.920 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:43:00.920 00:43:00.920 --- 10.0.0.2 ping statistics --- 00:43:00.920 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:00.920 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:43:00.920 13:05:20 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:43:00.920 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:43:00.920 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.035 ms 00:43:00.920 00:43:00.920 --- 10.0.0.3 ping statistics --- 00:43:00.920 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:00.920 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:43:00.920 13:05:20 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:43:00.920 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:43:00.920 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:43:00.920 00:43:00.920 --- 10.0.0.1 ping statistics --- 00:43:00.920 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:00.920 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:43:00.921 13:05:20 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:43:00.921 13:05:20 -- nvmf/common.sh@421 -- # return 0 00:43:00.921 13:05:20 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:43:00.921 13:05:20 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:43:00.921 13:05:20 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:43:00.921 13:05:20 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:43:00.921 13:05:20 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:43:00.921 13:05:20 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:43:00.921 13:05:20 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:43:00.921 13:05:20 -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:43:00.921 13:05:20 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:43:00.921 13:05:20 -- common/autotest_common.sh@712 -- # xtrace_disable 00:43:00.921 13:05:20 -- common/autotest_common.sh@10 -- # set +x 00:43:00.921 13:05:20 -- nvmf/common.sh@469 -- # nvmfpid=89341 00:43:00.921 13:05:20 -- nvmf/common.sh@470 -- # waitforlisten 89341 00:43:00.921 13:05:20 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:43:00.921 13:05:20 -- common/autotest_common.sh@819 -- # '[' -z 89341 ']' 00:43:00.921 13:05:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:00.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:00.921 13:05:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:43:00.921 13:05:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:00.921 13:05:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:43:00.921 13:05:20 -- common/autotest_common.sh@10 -- # set +x 00:43:00.921 [2024-07-22 13:05:20.242999] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:43:00.921 [2024-07-22 13:05:20.243081] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:01.180 [2024-07-22 13:05:20.384202] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:01.180 [2024-07-22 13:05:20.444760] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:43:01.180 [2024-07-22 13:05:20.444886] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:43:01.180 [2024-07-22 13:05:20.444897] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:43:01.180 [2024-07-22 13:05:20.444904] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
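Before the FIPS-mode run starts its own target, nvmf_veth_init (above) rebuilds the virtual topology: a network namespace for the target side, two veth pairs joined by a bridge, an iptables rule for port 4420, and ping checks in both directions. A condensed sketch of that sequence with the same interface names as this run (teardown of any previous run and the second target interface are left out):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br

iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                  # initiator -> target namespace
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1   # target namespace -> initiator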
00:43:01.180 [2024-07-22 13:05:20.444932] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:43:02.116 13:05:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:43:02.116 13:05:21 -- common/autotest_common.sh@852 -- # return 0 00:43:02.116 13:05:21 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:43:02.116 13:05:21 -- common/autotest_common.sh@718 -- # xtrace_disable 00:43:02.116 13:05:21 -- common/autotest_common.sh@10 -- # set +x 00:43:02.116 13:05:21 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:43:02.116 13:05:21 -- fips/fips.sh@134 -- # trap cleanup EXIT 00:43:02.116 13:05:21 -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:43:02.116 13:05:21 -- fips/fips.sh@138 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:43:02.116 13:05:21 -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:43:02.116 13:05:21 -- fips/fips.sh@140 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:43:02.116 13:05:21 -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:43:02.116 13:05:21 -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:43:02.116 13:05:21 -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:43:02.116 [2024-07-22 13:05:21.464258] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:02.116 [2024-07-22 13:05:21.480245] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:43:02.116 [2024-07-22 13:05:21.480408] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:02.116 malloc0 00:43:02.116 13:05:21 -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:43:02.376 13:05:21 -- fips/fips.sh@148 -- # bdevperf_pid=89393 00:43:02.376 13:05:21 -- fips/fips.sh@146 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:43:02.376 13:05:21 -- fips/fips.sh@149 -- # waitforlisten 89393 /var/tmp/bdevperf.sock 00:43:02.376 13:05:21 -- common/autotest_common.sh@819 -- # '[' -z 89393 ']' 00:43:02.376 13:05:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:43:02.376 13:05:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:43:02.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:43:02.376 13:05:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:43:02.376 13:05:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:43:02.376 13:05:21 -- common/autotest_common.sh@10 -- # set +x 00:43:02.376 [2024-07-22 13:05:21.601220] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:43:02.376 [2024-07-22 13:05:21.601298] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89393 ] 00:43:02.376 [2024-07-22 13:05:21.737926] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:02.634 [2024-07-22 13:05:21.812219] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:43:03.202 13:05:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:43:03.202 13:05:22 -- common/autotest_common.sh@852 -- # return 0 00:43:03.202 13:05:22 -- fips/fips.sh@151 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:43:03.460 [2024-07-22 13:05:22.750029] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:43:03.460 TLSTESTn1 00:43:03.460 13:05:22 -- fips/fips.sh@155 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:43:03.721 Running I/O for 10 seconds... 00:43:13.733 00:43:13.733 Latency(us) 00:43:13.733 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:13.733 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:43:13.733 Verification LBA range: start 0x0 length 0x2000 00:43:13.733 TLSTESTn1 : 10.02 5671.09 22.15 0.00 0.00 22533.29 5183.30 19779.96 00:43:13.733 =================================================================================================================== 00:43:13.733 Total : 5671.09 22.15 0.00 0.00 22533.29 5183.30 19779.96 00:43:13.733 0 00:43:13.733 13:05:32 -- fips/fips.sh@1 -- # cleanup 00:43:13.733 13:05:32 -- fips/fips.sh@15 -- # process_shm --id 0 00:43:13.733 13:05:32 -- common/autotest_common.sh@796 -- # type=--id 00:43:13.733 13:05:32 -- common/autotest_common.sh@797 -- # id=0 00:43:13.734 13:05:32 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:43:13.734 13:05:32 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:43:13.734 13:05:33 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:43:13.734 13:05:33 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:43:13.734 13:05:33 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:43:13.734 13:05:33 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:43:13.734 nvmf_trace.0 00:43:13.734 13:05:33 -- common/autotest_common.sh@811 -- # return 0 00:43:13.734 13:05:33 -- fips/fips.sh@16 -- # killprocess 89393 00:43:13.734 13:05:33 -- common/autotest_common.sh@926 -- # '[' -z 89393 ']' 00:43:13.734 13:05:33 -- common/autotest_common.sh@930 -- # kill -0 89393 00:43:13.734 13:05:33 -- common/autotest_common.sh@931 -- # uname 00:43:13.734 13:05:33 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:43:13.734 13:05:33 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 89393 00:43:13.734 13:05:33 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:43:13.734 13:05:33 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:43:13.734 13:05:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 89393' 00:43:13.734 killing 
process with pid 89393 00:43:13.734 13:05:33 -- common/autotest_common.sh@945 -- # kill 89393 00:43:13.734 Received shutdown signal, test time was about 10.000000 seconds 00:43:13.734 00:43:13.734 Latency(us) 00:43:13.734 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:13.734 =================================================================================================================== 00:43:13.734 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:43:13.734 13:05:33 -- common/autotest_common.sh@950 -- # wait 89393 00:43:13.995 13:05:33 -- fips/fips.sh@17 -- # nvmftestfini 00:43:13.995 13:05:33 -- nvmf/common.sh@476 -- # nvmfcleanup 00:43:13.995 13:05:33 -- nvmf/common.sh@116 -- # sync 00:43:13.996 13:05:33 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:43:13.996 13:05:33 -- nvmf/common.sh@119 -- # set +e 00:43:13.996 13:05:33 -- nvmf/common.sh@120 -- # for i in {1..20} 00:43:13.996 13:05:33 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:43:13.996 rmmod nvme_tcp 00:43:13.996 rmmod nvme_fabrics 00:43:13.996 rmmod nvme_keyring 00:43:13.996 13:05:33 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:43:13.996 13:05:33 -- nvmf/common.sh@123 -- # set -e 00:43:13.996 13:05:33 -- nvmf/common.sh@124 -- # return 0 00:43:13.996 13:05:33 -- nvmf/common.sh@477 -- # '[' -n 89341 ']' 00:43:13.996 13:05:33 -- nvmf/common.sh@478 -- # killprocess 89341 00:43:13.996 13:05:33 -- common/autotest_common.sh@926 -- # '[' -z 89341 ']' 00:43:13.996 13:05:33 -- common/autotest_common.sh@930 -- # kill -0 89341 00:43:13.996 13:05:33 -- common/autotest_common.sh@931 -- # uname 00:43:13.996 13:05:33 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:43:13.996 13:05:33 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 89341 00:43:14.254 13:05:33 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:43:14.254 killing process with pid 89341 00:43:14.254 13:05:33 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:43:14.254 13:05:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 89341' 00:43:14.254 13:05:33 -- common/autotest_common.sh@945 -- # kill 89341 00:43:14.254 13:05:33 -- common/autotest_common.sh@950 -- # wait 89341 00:43:14.254 13:05:33 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:43:14.254 13:05:33 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:43:14.254 13:05:33 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:43:14.254 13:05:33 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:43:14.254 13:05:33 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:43:14.254 13:05:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:14.254 13:05:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:43:14.254 13:05:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:14.254 13:05:33 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:43:14.254 13:05:33 -- fips/fips.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:43:14.254 ************************************ 00:43:14.254 END TEST nvmf_fips 00:43:14.254 ************************************ 00:43:14.254 00:43:14.254 real 0m14.108s 00:43:14.254 user 0m18.895s 00:43:14.254 sys 0m5.810s 00:43:14.254 13:05:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:43:14.254 13:05:33 -- common/autotest_common.sh@10 -- # set +x 00:43:14.513 13:05:33 -- nvmf/nvmf.sh@63 -- # '[' 1 -eq 1 ']' 00:43:14.513 13:05:33 -- nvmf/nvmf.sh@64 -- # run_test nvmf_fuzz 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:43:14.513 13:05:33 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:43:14.513 13:05:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:43:14.513 13:05:33 -- common/autotest_common.sh@10 -- # set +x 00:43:14.513 ************************************ 00:43:14.513 START TEST nvmf_fuzz 00:43:14.513 ************************************ 00:43:14.513 13:05:33 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:43:14.513 * Looking for test storage... 00:43:14.513 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:43:14.513 13:05:33 -- target/fabrics_fuzz.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:43:14.513 13:05:33 -- nvmf/common.sh@7 -- # uname -s 00:43:14.513 13:05:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:14.513 13:05:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:14.513 13:05:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:14.513 13:05:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:14.513 13:05:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:14.513 13:05:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:14.513 13:05:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:14.513 13:05:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:14.513 13:05:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:14.513 13:05:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:14.513 13:05:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:864ae4a3-cd96-439b-8ef2-6dab4d992115 00:43:14.513 13:05:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=864ae4a3-cd96-439b-8ef2-6dab4d992115 00:43:14.513 13:05:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:14.513 13:05:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:14.513 13:05:33 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:43:14.513 13:05:33 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:43:14.513 13:05:33 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:14.513 13:05:33 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:14.513 13:05:33 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:14.513 13:05:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:14.513 13:05:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:14.513 
13:05:33 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:14.513 13:05:33 -- paths/export.sh@5 -- # export PATH 00:43:14.513 13:05:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:14.513 13:05:33 -- nvmf/common.sh@46 -- # : 0 00:43:14.513 13:05:33 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:43:14.513 13:05:33 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:43:14.513 13:05:33 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:43:14.513 13:05:33 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:14.513 13:05:33 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:14.513 13:05:33 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:43:14.513 13:05:33 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:43:14.513 13:05:33 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:43:14.513 13:05:33 -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:43:14.513 13:05:33 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:43:14.513 13:05:33 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:43:14.513 13:05:33 -- nvmf/common.sh@436 -- # prepare_net_devs 00:43:14.513 13:05:33 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:43:14.513 13:05:33 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:43:14.513 13:05:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:14.513 13:05:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:43:14.513 13:05:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:14.513 13:05:33 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:43:14.513 13:05:33 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:43:14.513 13:05:33 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:43:14.513 13:05:33 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:43:14.513 13:05:33 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:43:14.513 13:05:33 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:43:14.513 13:05:33 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:43:14.513 13:05:33 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:43:14.513 13:05:33 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:43:14.514 13:05:33 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:43:14.514 13:05:33 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:43:14.514 13:05:33 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:43:14.514 13:05:33 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:43:14.514 13:05:33 -- nvmf/common.sh@147 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:43:14.514 13:05:33 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:43:14.514 13:05:33 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:43:14.514 13:05:33 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:43:14.514 13:05:33 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:43:14.514 13:05:33 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:43:14.514 13:05:33 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:43:14.514 Cannot find device "nvmf_tgt_br" 00:43:14.514 13:05:33 -- nvmf/common.sh@154 -- # true 00:43:14.514 13:05:33 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:43:14.514 Cannot find device "nvmf_tgt_br2" 00:43:14.514 13:05:33 -- nvmf/common.sh@155 -- # true 00:43:14.514 13:05:33 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:43:14.514 13:05:33 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:43:14.514 Cannot find device "nvmf_tgt_br" 00:43:14.514 13:05:33 -- nvmf/common.sh@157 -- # true 00:43:14.514 13:05:33 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:43:14.514 Cannot find device "nvmf_tgt_br2" 00:43:14.514 13:05:33 -- nvmf/common.sh@158 -- # true 00:43:14.514 13:05:33 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:43:14.514 13:05:33 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:43:14.772 13:05:33 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:43:14.772 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:43:14.772 13:05:33 -- nvmf/common.sh@161 -- # true 00:43:14.772 13:05:33 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:43:14.772 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:43:14.772 13:05:33 -- nvmf/common.sh@162 -- # true 00:43:14.772 13:05:33 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:43:14.772 13:05:33 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:43:14.772 13:05:33 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:43:14.772 13:05:33 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:43:14.772 13:05:33 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:43:14.772 13:05:33 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:43:14.772 13:05:34 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:43:14.772 13:05:34 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:43:14.772 13:05:34 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:43:14.772 13:05:34 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:43:14.772 13:05:34 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:43:14.772 13:05:34 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:43:14.772 13:05:34 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:43:14.772 13:05:34 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:43:14.772 13:05:34 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:43:14.772 13:05:34 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:43:14.772 13:05:34 -- nvmf/common.sh@191 -- # ip link add nvmf_br type 
bridge 00:43:14.772 13:05:34 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:43:14.772 13:05:34 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:43:14.772 13:05:34 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:43:14.772 13:05:34 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:43:14.772 13:05:34 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:43:14.772 13:05:34 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:43:14.772 13:05:34 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:43:14.772 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:43:14.772 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:43:14.772 00:43:14.772 --- 10.0.0.2 ping statistics --- 00:43:14.772 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:14.772 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:43:14.772 13:05:34 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:43:14.772 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:43:14.772 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:43:14.772 00:43:14.772 --- 10.0.0.3 ping statistics --- 00:43:14.772 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:14.772 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:43:14.772 13:05:34 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:43:14.772 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:43:14.772 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:43:14.772 00:43:14.772 --- 10.0.0.1 ping statistics --- 00:43:14.772 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:14.772 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:43:14.772 13:05:34 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:43:14.772 13:05:34 -- nvmf/common.sh@421 -- # return 0 00:43:14.772 13:05:34 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:43:14.772 13:05:34 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:43:14.772 13:05:34 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:43:14.772 13:05:34 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:43:14.772 13:05:34 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:43:14.772 13:05:34 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:43:14.772 13:05:34 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:43:14.772 13:05:34 -- target/fabrics_fuzz.sh@14 -- # nvmfpid=89734 00:43:14.772 13:05:34 -- target/fabrics_fuzz.sh@13 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:43:14.772 13:05:34 -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:43:14.772 13:05:34 -- target/fabrics_fuzz.sh@18 -- # waitforlisten 89734 00:43:14.772 13:05:34 -- common/autotest_common.sh@819 -- # '[' -z 89734 ']' 00:43:14.772 13:05:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:14.772 13:05:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:43:14.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:14.773 13:05:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
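For orientation, the nvmf_veth_init sequence traced above builds the test network before the fuzz target starts: the initiator-side interface nvmf_init_if (10.0.0.1/24) stays on the host, while nvmf_tgt_if (10.0.0.2/24) and nvmf_tgt_if2 (10.0.0.3/24) live inside the nvmf_tgt_ns_spdk namespace, all joined through the nvmf_br bridge, with an iptables rule admitting TCP port 4420 from the initiator interface. Reduced to the essential commands taken from the trace (link-up steps and the second target interface omitted):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator end, stays on the host
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target end, moved into the namespace
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

The ping checks that follow in the trace confirm this topology end to end before the target is started.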
00:43:14.773 13:05:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:43:14.773 13:05:34 -- common/autotest_common.sh@10 -- # set +x 00:43:16.147 13:05:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:43:16.147 13:05:35 -- common/autotest_common.sh@852 -- # return 0 00:43:16.147 13:05:35 -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:43:16.147 13:05:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:43:16.147 13:05:35 -- common/autotest_common.sh@10 -- # set +x 00:43:16.147 13:05:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:43:16.147 13:05:35 -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:43:16.147 13:05:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:43:16.147 13:05:35 -- common/autotest_common.sh@10 -- # set +x 00:43:16.147 Malloc0 00:43:16.147 13:05:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:43:16.147 13:05:35 -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:43:16.147 13:05:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:43:16.147 13:05:35 -- common/autotest_common.sh@10 -- # set +x 00:43:16.147 13:05:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:43:16.147 13:05:35 -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:43:16.147 13:05:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:43:16.147 13:05:35 -- common/autotest_common.sh@10 -- # set +x 00:43:16.147 13:05:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:43:16.147 13:05:35 -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:43:16.147 13:05:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:43:16.147 13:05:35 -- common/autotest_common.sh@10 -- # set +x 00:43:16.147 13:05:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:43:16.147 13:05:35 -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:43:16.147 13:05:35 -- target/fabrics_fuzz.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:43:16.147 Shutting down the fuzz application 00:43:16.147 13:05:35 -- target/fabrics_fuzz.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:43:16.406 Shutting down the fuzz application 00:43:16.406 13:05:35 -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:43:16.406 13:05:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:43:16.406 13:05:35 -- common/autotest_common.sh@10 -- # set +x 00:43:16.406 13:05:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:43:16.406 13:05:35 -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:43:16.406 13:05:35 -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:43:16.406 13:05:35 -- nvmf/common.sh@476 -- # nvmfcleanup 00:43:16.406 13:05:35 -- nvmf/common.sh@116 -- # sync 00:43:16.665 13:05:35 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:43:16.665 13:05:35 -- nvmf/common.sh@119 -- # set +e 00:43:16.665 13:05:35 -- 
nvmf/common.sh@120 -- # for i in {1..20} 00:43:16.665 13:05:35 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:43:16.665 rmmod nvme_tcp 00:43:16.665 rmmod nvme_fabrics 00:43:16.665 rmmod nvme_keyring 00:43:16.665 13:05:35 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:43:16.665 13:05:35 -- nvmf/common.sh@123 -- # set -e 00:43:16.665 13:05:35 -- nvmf/common.sh@124 -- # return 0 00:43:16.665 13:05:35 -- nvmf/common.sh@477 -- # '[' -n 89734 ']' 00:43:16.665 13:05:35 -- nvmf/common.sh@478 -- # killprocess 89734 00:43:16.665 13:05:35 -- common/autotest_common.sh@926 -- # '[' -z 89734 ']' 00:43:16.665 13:05:35 -- common/autotest_common.sh@930 -- # kill -0 89734 00:43:16.665 13:05:35 -- common/autotest_common.sh@931 -- # uname 00:43:16.665 13:05:35 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:43:16.665 13:05:35 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 89734 00:43:16.665 13:05:35 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:43:16.665 13:05:35 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:43:16.665 killing process with pid 89734 00:43:16.665 13:05:35 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 89734' 00:43:16.665 13:05:35 -- common/autotest_common.sh@945 -- # kill 89734 00:43:16.665 13:05:35 -- common/autotest_common.sh@950 -- # wait 89734 00:43:16.924 13:05:36 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:43:16.924 13:05:36 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:43:16.924 13:05:36 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:43:16.924 13:05:36 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:43:16.924 13:05:36 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:43:16.924 13:05:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:16.924 13:05:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:43:16.924 13:05:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:16.924 13:05:36 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:43:16.924 13:05:36 -- target/fabrics_fuzz.sh@39 -- # rm /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs1.txt /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs2.txt 00:43:16.924 00:43:16.924 real 0m2.487s 00:43:16.924 user 0m2.425s 00:43:16.924 sys 0m0.643s 00:43:16.924 13:05:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:43:16.924 13:05:36 -- common/autotest_common.sh@10 -- # set +x 00:43:16.924 ************************************ 00:43:16.924 END TEST nvmf_fuzz 00:43:16.924 ************************************ 00:43:16.924 13:05:36 -- nvmf/nvmf.sh@65 -- # run_test nvmf_multiconnection /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:43:16.924 13:05:36 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:43:16.924 13:05:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:43:16.924 13:05:36 -- common/autotest_common.sh@10 -- # set +x 00:43:16.924 ************************************ 00:43:16.924 START TEST nvmf_multiconnection 00:43:16.924 ************************************ 00:43:16.924 13:05:36 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:43:16.924 * Looking for test storage... 
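For reference, the fuzz pass that just finished above (END TEST nvmf_fuzz) drove the subsystem with the two nvme_fuzz invocations recorded in the trace: a timed, seeded randomized run, followed by a replay of the canned commands in example.json. Stripped of the test-framework wrappers, and with all arguments exactly as traced, they are:

    trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420'
    fuzz=/home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz
    # 30-second randomized run with a fixed seed (-S 123456) for reproducibility
    "$fuzz" -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F "$trid" -N -a
    # replay of the example command set shipped with the fuzzer
    "$fuzz" -m 0x2 -r /var/tmp/nvme_fuzz -F "$trid" \
        -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a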
00:43:16.925 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:43:16.925 13:05:36 -- target/multiconnection.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:43:16.925 13:05:36 -- nvmf/common.sh@7 -- # uname -s 00:43:16.925 13:05:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:16.925 13:05:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:16.925 13:05:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:16.925 13:05:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:16.925 13:05:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:16.925 13:05:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:16.925 13:05:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:16.925 13:05:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:16.925 13:05:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:16.925 13:05:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:16.925 13:05:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:864ae4a3-cd96-439b-8ef2-6dab4d992115 00:43:16.925 13:05:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=864ae4a3-cd96-439b-8ef2-6dab4d992115 00:43:16.925 13:05:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:16.925 13:05:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:16.925 13:05:36 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:43:16.925 13:05:36 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:43:16.925 13:05:36 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:16.925 13:05:36 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:16.925 13:05:36 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:16.925 13:05:36 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:16.925 13:05:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:16.925 13:05:36 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:16.925 13:05:36 -- 
paths/export.sh@5 -- # export PATH 00:43:16.925 13:05:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:16.925 13:05:36 -- nvmf/common.sh@46 -- # : 0 00:43:16.925 13:05:36 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:43:16.925 13:05:36 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:43:16.925 13:05:36 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:43:16.925 13:05:36 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:16.925 13:05:36 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:16.925 13:05:36 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:43:16.925 13:05:36 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:43:16.925 13:05:36 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:43:16.925 13:05:36 -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:43:16.925 13:05:36 -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:43:16.925 13:05:36 -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:43:16.925 13:05:36 -- target/multiconnection.sh@16 -- # nvmftestinit 00:43:16.925 13:05:36 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:43:16.925 13:05:36 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:43:16.925 13:05:36 -- nvmf/common.sh@436 -- # prepare_net_devs 00:43:16.925 13:05:36 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:43:16.925 13:05:36 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:43:16.925 13:05:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:16.925 13:05:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:43:16.925 13:05:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:17.184 13:05:36 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:43:17.184 13:05:36 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:43:17.184 13:05:36 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:43:17.184 13:05:36 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:43:17.184 13:05:36 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:43:17.184 13:05:36 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:43:17.184 13:05:36 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:43:17.184 13:05:36 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:43:17.184 13:05:36 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:43:17.184 13:05:36 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:43:17.184 13:05:36 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:43:17.184 13:05:36 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:43:17.184 13:05:36 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:43:17.184 13:05:36 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:43:17.184 13:05:36 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:43:17.184 13:05:36 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:43:17.184 13:05:36 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:43:17.184 13:05:36 -- nvmf/common.sh@151 -- # 
NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:43:17.184 13:05:36 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:43:17.184 13:05:36 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:43:17.184 Cannot find device "nvmf_tgt_br" 00:43:17.184 13:05:36 -- nvmf/common.sh@154 -- # true 00:43:17.184 13:05:36 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:43:17.184 Cannot find device "nvmf_tgt_br2" 00:43:17.184 13:05:36 -- nvmf/common.sh@155 -- # true 00:43:17.184 13:05:36 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:43:17.184 13:05:36 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:43:17.184 Cannot find device "nvmf_tgt_br" 00:43:17.184 13:05:36 -- nvmf/common.sh@157 -- # true 00:43:17.184 13:05:36 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:43:17.184 Cannot find device "nvmf_tgt_br2" 00:43:17.184 13:05:36 -- nvmf/common.sh@158 -- # true 00:43:17.184 13:05:36 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:43:17.184 13:05:36 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:43:17.184 13:05:36 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:43:17.184 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:43:17.184 13:05:36 -- nvmf/common.sh@161 -- # true 00:43:17.184 13:05:36 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:43:17.184 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:43:17.184 13:05:36 -- nvmf/common.sh@162 -- # true 00:43:17.184 13:05:36 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:43:17.184 13:05:36 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:43:17.184 13:05:36 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:43:17.184 13:05:36 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:43:17.184 13:05:36 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:43:17.184 13:05:36 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:43:17.184 13:05:36 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:43:17.184 13:05:36 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:43:17.184 13:05:36 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:43:17.184 13:05:36 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:43:17.184 13:05:36 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:43:17.184 13:05:36 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:43:17.184 13:05:36 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:43:17.184 13:05:36 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:43:17.184 13:05:36 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:43:17.184 13:05:36 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:43:17.442 13:05:36 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:43:17.442 13:05:36 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:43:17.442 13:05:36 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:43:17.442 13:05:36 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:43:17.442 13:05:36 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:43:17.442 
13:05:36 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:43:17.442 13:05:36 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:43:17.442 13:05:36 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:43:17.442 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:43:17.442 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:43:17.442 00:43:17.442 --- 10.0.0.2 ping statistics --- 00:43:17.442 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:17.442 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:43:17.442 13:05:36 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:43:17.442 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:43:17.442 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:43:17.443 00:43:17.443 --- 10.0.0.3 ping statistics --- 00:43:17.443 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:17.443 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:43:17.443 13:05:36 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:43:17.443 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:43:17.443 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:43:17.443 00:43:17.443 --- 10.0.0.1 ping statistics --- 00:43:17.443 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:17.443 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:43:17.443 13:05:36 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:43:17.443 13:05:36 -- nvmf/common.sh@421 -- # return 0 00:43:17.443 13:05:36 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:43:17.443 13:05:36 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:43:17.443 13:05:36 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:43:17.443 13:05:36 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:43:17.443 13:05:36 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:43:17.443 13:05:36 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:43:17.443 13:05:36 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:43:17.443 13:05:36 -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:43:17.443 13:05:36 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:43:17.443 13:05:36 -- common/autotest_common.sh@712 -- # xtrace_disable 00:43:17.443 13:05:36 -- common/autotest_common.sh@10 -- # set +x 00:43:17.443 13:05:36 -- nvmf/common.sh@469 -- # nvmfpid=89937 00:43:17.443 13:05:36 -- nvmf/common.sh@470 -- # waitforlisten 89937 00:43:17.443 13:05:36 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:43:17.443 13:05:36 -- common/autotest_common.sh@819 -- # '[' -z 89937 ']' 00:43:17.443 13:05:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:17.443 13:05:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:43:17.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:17.443 13:05:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:17.443 13:05:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:43:17.443 13:05:36 -- common/autotest_common.sh@10 -- # set +x 00:43:17.443 [2024-07-22 13:05:36.766561] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
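At this point the multiconnection test launches the NVMe-oF target application inside the namespace prepared above and waits for its RPC socket (/var/tmp/spdk.sock). The launch line recorded in the trace, stripped of the nvmfappstart wrapper, is shown below; the flag readings are the standard SPDK application options (-i shared-memory id, -e tracepoint group mask, -m reactor core mask, here 0xF for cores 0-3, matching the four reactors reported just after):

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!   # pid capture sketched with $!; the trace records the resolved pid (89937)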
00:43:17.443 [2024-07-22 13:05:36.766649] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:17.702 [2024-07-22 13:05:36.904545] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:43:17.702 [2024-07-22 13:05:36.969339] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:43:17.702 [2024-07-22 13:05:36.969493] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:43:17.702 [2024-07-22 13:05:36.969506] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:43:17.702 [2024-07-22 13:05:36.969529] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:43:17.702 [2024-07-22 13:05:36.969708] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:43:17.702 [2024-07-22 13:05:36.969850] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:43:17.702 [2024-07-22 13:05:36.969959] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:43:17.702 [2024-07-22 13:05:36.969960] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:43:18.639 13:05:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:43:18.639 13:05:37 -- common/autotest_common.sh@852 -- # return 0 00:43:18.639 13:05:37 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:43:18.639 13:05:37 -- common/autotest_common.sh@718 -- # xtrace_disable 00:43:18.639 13:05:37 -- common/autotest_common.sh@10 -- # set +x 00:43:18.639 13:05:37 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:43:18.640 13:05:37 -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:43:18.640 13:05:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:43:18.640 13:05:37 -- common/autotest_common.sh@10 -- # set +x 00:43:18.640 [2024-07-22 13:05:37.786705] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:18.640 13:05:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:43:18.640 13:05:37 -- target/multiconnection.sh@21 -- # seq 1 11 00:43:18.640 13:05:37 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:43:18.640 13:05:37 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:43:18.640 13:05:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:43:18.640 13:05:37 -- common/autotest_common.sh@10 -- # set +x 00:43:18.640 Malloc1 00:43:18.640 13:05:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:43:18.640 13:05:37 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:43:18.640 13:05:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:43:18.640 13:05:37 -- common/autotest_common.sh@10 -- # set +x 00:43:18.640 13:05:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:43:18.640 13:05:37 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:43:18.640 13:05:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:43:18.640 13:05:37 -- common/autotest_common.sh@10 -- # set +x 00:43:18.640 13:05:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:43:18.640 13:05:37 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:43:18.640 13:05:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:43:18.640 13:05:37 -- common/autotest_common.sh@10 -- # set +x 00:43:18.640 [2024-07-22 13:05:37.853952] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:18.640 13:05:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:43:18.640 13:05:37 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:43:18.640 13:05:37 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:43:18.640 13:05:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:43:18.640 13:05:37 -- common/autotest_common.sh@10 -- # set +x 00:43:18.640 Malloc2 00:43:18.640 13:05:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:43:18.640 13:05:37 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:43:18.640 13:05:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:43:18.640 13:05:37 -- common/autotest_common.sh@10 -- # set +x 00:43:18.640 13:05:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:43:18.640 13:05:37 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:43:18.640 13:05:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:43:18.640 13:05:37 -- common/autotest_common.sh@10 -- # set +x 00:43:18.640 13:05:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:43:18.640 13:05:37 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:43:18.640 13:05:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:43:18.640 13:05:37 -- common/autotest_common.sh@10 -- # set +x 00:43:18.640 13:05:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:43:18.640 13:05:37 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:43:18.640 13:05:37 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:43:18.640 13:05:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:43:18.640 13:05:37 -- common/autotest_common.sh@10 -- # set +x 00:43:18.640 Malloc3 00:43:18.640 13:05:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:43:18.640 13:05:37 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:43:18.640 13:05:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:43:18.640 13:05:37 -- common/autotest_common.sh@10 -- # set +x 00:43:18.640 13:05:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:43:18.640 13:05:37 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:43:18.640 13:05:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:43:18.640 13:05:37 -- common/autotest_common.sh@10 -- # set +x 00:43:18.640 13:05:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:43:18.640 13:05:37 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:43:18.640 13:05:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:43:18.640 13:05:37 -- common/autotest_common.sh@10 -- # set +x 00:43:18.640 13:05:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:43:18.640 13:05:37 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:43:18.640 13:05:37 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:43:18.640 
13:05:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:43:18.640 13:05:37 -- common/autotest_common.sh@10 -- # set +x 00:43:18.640 Malloc4 00:43:18.640 13:05:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:43:18.640 13:05:37 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:43:18.640 13:05:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:43:18.640 13:05:37 -- common/autotest_common.sh@10 -- # set +x 00:43:18.640 13:05:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:43:18.640 13:05:37 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:43:18.640 13:05:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:43:18.640 13:05:37 -- common/autotest_common.sh@10 -- # set +x 00:43:18.640 13:05:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:43:18.640 13:05:37 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:43:18.640 13:05:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:43:18.640 13:05:37 -- common/autotest_common.sh@10 -- # set +x 00:43:18.640 13:05:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:43:18.640 13:05:37 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:43:18.640 13:05:37 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:43:18.640 13:05:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:43:18.640 13:05:37 -- common/autotest_common.sh@10 -- # set +x 00:43:18.640 Malloc5 00:43:18.640 13:05:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:43:18.640 13:05:38 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:43:18.640 13:05:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:43:18.640 13:05:38 -- common/autotest_common.sh@10 -- # set +x 00:43:18.640 13:05:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:43:18.640 13:05:38 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:43:18.640 13:05:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:43:18.640 13:05:38 -- common/autotest_common.sh@10 -- # set +x 00:43:18.640 13:05:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:43:18.640 13:05:38 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:43:18.640 13:05:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:43:18.640 13:05:38 -- common/autotest_common.sh@10 -- # set +x 00:43:18.640 13:05:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:43:18.640 13:05:38 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:43:18.640 13:05:38 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:43:18.640 13:05:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:43:18.640 13:05:38 -- common/autotest_common.sh@10 -- # set +x 00:43:18.899 Malloc6 00:43:18.900 13:05:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:43:18.900 13:05:38 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:43:18.900 13:05:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:43:18.900 13:05:38 -- common/autotest_common.sh@10 -- # set +x 00:43:18.900 13:05:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:43:18.900 13:05:38 -- 
target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:43:18.900 13:05:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:43:18.900 13:05:38 -- common/autotest_common.sh@10 -- # set +x 00:43:18.900 13:05:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:43:18.900 13:05:38 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:43:18.900 13:05:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:43:18.900 13:05:38 -- common/autotest_common.sh@10 -- # set +x 00:43:18.900 13:05:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:43:18.900 13:05:38 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:43:18.900 13:05:38 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:43:18.900 13:05:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:43:18.900 13:05:38 -- common/autotest_common.sh@10 -- # set +x 00:43:18.900 Malloc7 00:43:18.900 13:05:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:43:18.900 13:05:38 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:43:18.900 13:05:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:43:18.900 13:05:38 -- common/autotest_common.sh@10 -- # set +x 00:43:18.900 13:05:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:43:18.900 13:05:38 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:43:18.900 13:05:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:43:18.900 13:05:38 -- common/autotest_common.sh@10 -- # set +x 00:43:18.900 13:05:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:43:18.900 13:05:38 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:43:18.900 13:05:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:43:18.900 13:05:38 -- common/autotest_common.sh@10 -- # set +x 00:43:18.900 13:05:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:43:18.900 13:05:38 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:43:18.900 13:05:38 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:43:18.900 13:05:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:43:18.900 13:05:38 -- common/autotest_common.sh@10 -- # set +x 00:43:18.900 Malloc8 00:43:18.900 13:05:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:43:18.900 13:05:38 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:43:18.900 13:05:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:43:18.900 13:05:38 -- common/autotest_common.sh@10 -- # set +x 00:43:18.900 13:05:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:43:18.900 13:05:38 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:43:18.900 13:05:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:43:18.900 13:05:38 -- common/autotest_common.sh@10 -- # set +x 00:43:18.900 13:05:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:43:18.900 13:05:38 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:43:18.900 13:05:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:43:18.900 13:05:38 -- common/autotest_common.sh@10 -- # set +x 
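The repetitive RPC sequence above (and continuing below through Malloc11) is multiconnection.sh's setup loop: one malloc bdev, one subsystem, one namespace and one TCP listener per connection, eleven times over (NVMF_SUBSYS=11, MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512 earlier in the trace). The trace drives it through the rpc_cmd wrapper; written as direct rpc.py calls against the target's RPC socket, it amounts to roughly:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    for i in $(seq 1 11); do
        $rpc bdev_malloc_create 64 512 -b "Malloc$i"              # 64 MB malloc bdev, 512-byte blocks
        $rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"   # -a: any host, -s: serial
        $rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
        $rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
    done

The serial number SPDK$i is what the later waitforserial checks grep for in lsblk output after each nvme connect.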
00:43:18.900 13:05:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:43:18.900 13:05:38 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:43:18.900 13:05:38 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:43:18.900 13:05:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:43:18.900 13:05:38 -- common/autotest_common.sh@10 -- # set +x 00:43:18.900 Malloc9 00:43:18.900 13:05:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:43:18.900 13:05:38 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:43:18.900 13:05:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:43:18.900 13:05:38 -- common/autotest_common.sh@10 -- # set +x 00:43:18.900 13:05:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:43:18.900 13:05:38 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:43:18.900 13:05:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:43:18.900 13:05:38 -- common/autotest_common.sh@10 -- # set +x 00:43:18.900 13:05:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:43:18.900 13:05:38 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:43:18.900 13:05:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:43:18.900 13:05:38 -- common/autotest_common.sh@10 -- # set +x 00:43:18.900 13:05:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:43:18.900 13:05:38 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:43:18.900 13:05:38 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:43:18.900 13:05:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:43:18.900 13:05:38 -- common/autotest_common.sh@10 -- # set +x 00:43:18.900 Malloc10 00:43:18.900 13:05:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:43:18.900 13:05:38 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:43:18.900 13:05:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:43:18.900 13:05:38 -- common/autotest_common.sh@10 -- # set +x 00:43:18.900 13:05:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:43:18.900 13:05:38 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:43:18.900 13:05:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:43:18.900 13:05:38 -- common/autotest_common.sh@10 -- # set +x 00:43:18.900 13:05:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:43:18.900 13:05:38 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:43:18.900 13:05:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:43:18.900 13:05:38 -- common/autotest_common.sh@10 -- # set +x 00:43:18.900 13:05:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:43:18.900 13:05:38 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:43:18.900 13:05:38 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:43:18.900 13:05:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:43:18.900 13:05:38 -- common/autotest_common.sh@10 -- # set +x 00:43:18.900 Malloc11 00:43:18.900 13:05:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:43:18.900 13:05:38 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:43:18.900 13:05:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:43:18.900 13:05:38 -- common/autotest_common.sh@10 -- # set +x 00:43:18.900 13:05:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:43:18.900 13:05:38 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:43:18.900 13:05:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:43:18.900 13:05:38 -- common/autotest_common.sh@10 -- # set +x 00:43:19.159 13:05:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:43:19.159 13:05:38 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:43:19.159 13:05:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:43:19.159 13:05:38 -- common/autotest_common.sh@10 -- # set +x 00:43:19.159 13:05:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:43:19.159 13:05:38 -- target/multiconnection.sh@28 -- # seq 1 11 00:43:19.159 13:05:38 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:43:19.159 13:05:38 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:864ae4a3-cd96-439b-8ef2-6dab4d992115 --hostid=864ae4a3-cd96-439b-8ef2-6dab4d992115 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:43:19.159 13:05:38 -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:43:19.159 13:05:38 -- common/autotest_common.sh@1177 -- # local i=0 00:43:19.159 13:05:38 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:43:19.159 13:05:38 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:43:19.159 13:05:38 -- common/autotest_common.sh@1184 -- # sleep 2 00:43:21.694 13:05:40 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:43:21.694 13:05:40 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:43:21.694 13:05:40 -- common/autotest_common.sh@1186 -- # grep -c SPDK1 00:43:21.694 13:05:40 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:43:21.694 13:05:40 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:43:21.694 13:05:40 -- common/autotest_common.sh@1187 -- # return 0 00:43:21.694 13:05:40 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:43:21.694 13:05:40 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:864ae4a3-cd96-439b-8ef2-6dab4d992115 --hostid=864ae4a3-cd96-439b-8ef2-6dab4d992115 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:43:21.694 13:05:40 -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:43:21.694 13:05:40 -- common/autotest_common.sh@1177 -- # local i=0 00:43:21.694 13:05:40 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:43:21.694 13:05:40 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:43:21.694 13:05:40 -- common/autotest_common.sh@1184 -- # sleep 2 00:43:23.598 13:05:42 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:43:23.598 13:05:42 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:43:23.598 13:05:42 -- common/autotest_common.sh@1186 -- # grep -c SPDK2 00:43:23.598 13:05:42 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:43:23.598 13:05:42 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:43:23.598 13:05:42 -- common/autotest_common.sh@1187 -- # return 0 00:43:23.598 13:05:42 -- target/multiconnection.sh@28 -- # for i in $(seq 1 
$NVMF_SUBSYS) 00:43:23.598 13:05:42 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:864ae4a3-cd96-439b-8ef2-6dab4d992115 --hostid=864ae4a3-cd96-439b-8ef2-6dab4d992115 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:43:23.598 13:05:42 -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:43:23.598 13:05:42 -- common/autotest_common.sh@1177 -- # local i=0 00:43:23.598 13:05:42 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:43:23.598 13:05:42 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:43:23.598 13:05:42 -- common/autotest_common.sh@1184 -- # sleep 2 00:43:25.501 13:05:44 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:43:25.501 13:05:44 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:43:25.501 13:05:44 -- common/autotest_common.sh@1186 -- # grep -c SPDK3 00:43:25.501 13:05:44 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:43:25.501 13:05:44 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:43:25.501 13:05:44 -- common/autotest_common.sh@1187 -- # return 0 00:43:25.501 13:05:44 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:43:25.501 13:05:44 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:864ae4a3-cd96-439b-8ef2-6dab4d992115 --hostid=864ae4a3-cd96-439b-8ef2-6dab4d992115 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:43:25.759 13:05:45 -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:43:25.759 13:05:45 -- common/autotest_common.sh@1177 -- # local i=0 00:43:25.759 13:05:45 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:43:25.759 13:05:45 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:43:25.759 13:05:45 -- common/autotest_common.sh@1184 -- # sleep 2 00:43:27.726 13:05:47 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:43:27.726 13:05:47 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:43:27.726 13:05:47 -- common/autotest_common.sh@1186 -- # grep -c SPDK4 00:43:27.726 13:05:47 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:43:27.726 13:05:47 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:43:27.726 13:05:47 -- common/autotest_common.sh@1187 -- # return 0 00:43:27.726 13:05:47 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:43:27.726 13:05:47 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:864ae4a3-cd96-439b-8ef2-6dab4d992115 --hostid=864ae4a3-cd96-439b-8ef2-6dab4d992115 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:43:27.985 13:05:47 -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:43:27.985 13:05:47 -- common/autotest_common.sh@1177 -- # local i=0 00:43:27.985 13:05:47 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:43:27.985 13:05:47 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:43:27.985 13:05:47 -- common/autotest_common.sh@1184 -- # sleep 2 00:43:29.887 13:05:49 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:43:29.887 13:05:49 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:43:29.887 13:05:49 -- common/autotest_common.sh@1186 -- # grep -c SPDK5 00:43:29.887 13:05:49 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:43:29.887 13:05:49 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:43:29.887 13:05:49 
-- common/autotest_common.sh@1187 -- # return 0 00:43:29.887 13:05:49 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:43:29.887 13:05:49 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:864ae4a3-cd96-439b-8ef2-6dab4d992115 --hostid=864ae4a3-cd96-439b-8ef2-6dab4d992115 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:43:30.146 13:05:49 -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:43:30.146 13:05:49 -- common/autotest_common.sh@1177 -- # local i=0 00:43:30.146 13:05:49 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:43:30.146 13:05:49 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:43:30.146 13:05:49 -- common/autotest_common.sh@1184 -- # sleep 2 00:43:32.677 13:05:51 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:43:32.677 13:05:51 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:43:32.677 13:05:51 -- common/autotest_common.sh@1186 -- # grep -c SPDK6 00:43:32.677 13:05:51 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:43:32.677 13:05:51 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:43:32.677 13:05:51 -- common/autotest_common.sh@1187 -- # return 0 00:43:32.677 13:05:51 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:43:32.677 13:05:51 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:864ae4a3-cd96-439b-8ef2-6dab4d992115 --hostid=864ae4a3-cd96-439b-8ef2-6dab4d992115 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:43:32.677 13:05:51 -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:43:32.677 13:05:51 -- common/autotest_common.sh@1177 -- # local i=0 00:43:32.677 13:05:51 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:43:32.677 13:05:51 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:43:32.677 13:05:51 -- common/autotest_common.sh@1184 -- # sleep 2 00:43:34.578 13:05:53 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:43:34.578 13:05:53 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:43:34.578 13:05:53 -- common/autotest_common.sh@1186 -- # grep -c SPDK7 00:43:34.578 13:05:53 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:43:34.578 13:05:53 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:43:34.578 13:05:53 -- common/autotest_common.sh@1187 -- # return 0 00:43:34.578 13:05:53 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:43:34.578 13:05:53 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:864ae4a3-cd96-439b-8ef2-6dab4d992115 --hostid=864ae4a3-cd96-439b-8ef2-6dab4d992115 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:43:34.578 13:05:53 -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:43:34.578 13:05:53 -- common/autotest_common.sh@1177 -- # local i=0 00:43:34.578 13:05:53 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:43:34.578 13:05:53 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:43:34.578 13:05:53 -- common/autotest_common.sh@1184 -- # sleep 2 00:43:36.493 13:05:55 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:43:36.494 13:05:55 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:43:36.494 13:05:55 -- common/autotest_common.sh@1186 -- # grep -c SPDK8 00:43:36.494 13:05:55 -- common/autotest_common.sh@1186 -- # nvme_devices=1 
00:43:36.494 13:05:55 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:43:36.494 13:05:55 -- common/autotest_common.sh@1187 -- # return 0 00:43:36.494 13:05:55 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:43:36.494 13:05:55 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:864ae4a3-cd96-439b-8ef2-6dab4d992115 --hostid=864ae4a3-cd96-439b-8ef2-6dab4d992115 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:43:36.752 13:05:56 -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:43:36.752 13:05:56 -- common/autotest_common.sh@1177 -- # local i=0 00:43:36.752 13:05:56 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:43:36.752 13:05:56 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:43:36.752 13:05:56 -- common/autotest_common.sh@1184 -- # sleep 2 00:43:38.656 13:05:58 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:43:38.656 13:05:58 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:43:38.656 13:05:58 -- common/autotest_common.sh@1186 -- # grep -c SPDK9 00:43:38.914 13:05:58 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:43:38.914 13:05:58 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:43:38.914 13:05:58 -- common/autotest_common.sh@1187 -- # return 0 00:43:38.914 13:05:58 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:43:38.914 13:05:58 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:864ae4a3-cd96-439b-8ef2-6dab4d992115 --hostid=864ae4a3-cd96-439b-8ef2-6dab4d992115 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:43:38.914 13:05:58 -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:43:38.914 13:05:58 -- common/autotest_common.sh@1177 -- # local i=0 00:43:38.914 13:05:58 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:43:38.914 13:05:58 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:43:38.914 13:05:58 -- common/autotest_common.sh@1184 -- # sleep 2 00:43:41.454 13:06:00 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:43:41.454 13:06:00 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:43:41.454 13:06:00 -- common/autotest_common.sh@1186 -- # grep -c SPDK10 00:43:41.454 13:06:00 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:43:41.454 13:06:00 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:43:41.454 13:06:00 -- common/autotest_common.sh@1187 -- # return 0 00:43:41.454 13:06:00 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:43:41.454 13:06:00 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:864ae4a3-cd96-439b-8ef2-6dab4d992115 --hostid=864ae4a3-cd96-439b-8ef2-6dab4d992115 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:43:41.454 13:06:00 -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:43:41.454 13:06:00 -- common/autotest_common.sh@1177 -- # local i=0 00:43:41.454 13:06:00 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:43:41.454 13:06:00 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:43:41.454 13:06:00 -- common/autotest_common.sh@1184 -- # sleep 2 00:43:43.357 13:06:02 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:43:43.357 13:06:02 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:43:43.357 13:06:02 
-- common/autotest_common.sh@1186 -- # grep -c SPDK11 00:43:43.357 13:06:02 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:43:43.357 13:06:02 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:43:43.357 13:06:02 -- common/autotest_common.sh@1187 -- # return 0 00:43:43.357 13:06:02 -- target/multiconnection.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:43:43.357 [global] 00:43:43.357 thread=1 00:43:43.357 invalidate=1 00:43:43.357 rw=read 00:43:43.357 time_based=1 00:43:43.357 runtime=10 00:43:43.357 ioengine=libaio 00:43:43.357 direct=1 00:43:43.357 bs=262144 00:43:43.357 iodepth=64 00:43:43.357 norandommap=1 00:43:43.357 numjobs=1 00:43:43.357 00:43:43.357 [job0] 00:43:43.357 filename=/dev/nvme0n1 00:43:43.357 [job1] 00:43:43.357 filename=/dev/nvme10n1 00:43:43.357 [job2] 00:43:43.357 filename=/dev/nvme1n1 00:43:43.357 [job3] 00:43:43.357 filename=/dev/nvme2n1 00:43:43.357 [job4] 00:43:43.357 filename=/dev/nvme3n1 00:43:43.357 [job5] 00:43:43.357 filename=/dev/nvme4n1 00:43:43.357 [job6] 00:43:43.357 filename=/dev/nvme5n1 00:43:43.357 [job7] 00:43:43.357 filename=/dev/nvme6n1 00:43:43.357 [job8] 00:43:43.357 filename=/dev/nvme7n1 00:43:43.357 [job9] 00:43:43.357 filename=/dev/nvme8n1 00:43:43.357 [job10] 00:43:43.357 filename=/dev/nvme9n1 00:43:43.357 Could not set queue depth (nvme0n1) 00:43:43.357 Could not set queue depth (nvme10n1) 00:43:43.357 Could not set queue depth (nvme1n1) 00:43:43.357 Could not set queue depth (nvme2n1) 00:43:43.357 Could not set queue depth (nvme3n1) 00:43:43.357 Could not set queue depth (nvme4n1) 00:43:43.357 Could not set queue depth (nvme5n1) 00:43:43.357 Could not set queue depth (nvme6n1) 00:43:43.357 Could not set queue depth (nvme7n1) 00:43:43.357 Could not set queue depth (nvme8n1) 00:43:43.357 Could not set queue depth (nvme9n1) 00:43:43.616 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:43:43.616 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:43:43.616 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:43:43.616 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:43:43.616 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:43:43.616 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:43:43.616 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:43:43.616 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:43:43.616 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:43:43.616 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:43:43.616 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:43:43.616 fio-3.35 00:43:43.616 Starting 11 threads 00:43:55.858 00:43:55.858 job0: (groupid=0, jobs=1): err= 0: pid=90419: Mon Jul 22 13:06:13 2024 00:43:55.858 read: IOPS=534, BW=134MiB/s (140MB/s)(1351MiB/10108msec) 00:43:55.858 slat (usec): min=17, max=74478, avg=1823.22, stdev=6404.67 
00:43:55.858 clat (msec): min=24, max=257, avg=117.61, stdev=31.86 00:43:55.858 lat (msec): min=24, max=257, avg=119.43, stdev=32.79 00:43:55.858 clat percentiles (msec): 00:43:55.858 | 1.00th=[ 49], 5.00th=[ 59], 10.00th=[ 66], 20.00th=[ 92], 00:43:55.858 | 30.00th=[ 111], 40.00th=[ 116], 50.00th=[ 122], 60.00th=[ 125], 00:43:55.858 | 70.00th=[ 133], 80.00th=[ 146], 90.00th=[ 153], 95.00th=[ 159], 00:43:55.858 | 99.00th=[ 184], 99.50th=[ 224], 99.90th=[ 257], 99.95th=[ 257], 00:43:55.858 | 99.99th=[ 257] 00:43:55.858 bw ( KiB/s): min=97792, max=260598, per=6.42%, avg=136643.05, stdev=38535.16, samples=20 00:43:55.858 iops : min= 382, max= 1017, avg=533.55, stdev=150.38, samples=20 00:43:55.858 lat (msec) : 50=1.09%, 100=22.21%, 250=76.52%, 500=0.19% 00:43:55.858 cpu : usr=0.19%, sys=1.74%, ctx=1103, majf=0, minf=4097 00:43:55.858 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:43:55.858 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:55.858 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:43:55.858 issued rwts: total=5404,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:55.858 latency : target=0, window=0, percentile=100.00%, depth=64 00:43:55.858 job1: (groupid=0, jobs=1): err= 0: pid=90420: Mon Jul 22 13:06:13 2024 00:43:55.858 read: IOPS=496, BW=124MiB/s (130MB/s)(1257MiB/10115msec) 00:43:55.858 slat (usec): min=21, max=80530, avg=1993.39, stdev=6440.51 00:43:55.858 clat (msec): min=25, max=298, avg=126.56, stdev=25.77 00:43:55.858 lat (msec): min=26, max=298, avg=128.55, stdev=26.72 00:43:55.858 clat percentiles (msec): 00:43:55.858 | 1.00th=[ 61], 5.00th=[ 86], 10.00th=[ 93], 20.00th=[ 107], 00:43:55.858 | 30.00th=[ 117], 40.00th=[ 123], 50.00th=[ 127], 60.00th=[ 131], 00:43:55.858 | 70.00th=[ 138], 80.00th=[ 150], 90.00th=[ 159], 95.00th=[ 165], 00:43:55.858 | 99.00th=[ 190], 99.50th=[ 190], 99.90th=[ 226], 99.95th=[ 234], 00:43:55.858 | 99.99th=[ 300] 00:43:55.858 bw ( KiB/s): min=96768, max=179200, per=5.97%, avg=127013.55, stdev=22816.19, samples=20 00:43:55.858 iops : min= 378, max= 700, avg=496.05, stdev=89.09, samples=20 00:43:55.858 lat (msec) : 50=0.97%, 100=13.96%, 250=85.02%, 500=0.04% 00:43:55.858 cpu : usr=0.22%, sys=1.62%, ctx=1062, majf=0, minf=4097 00:43:55.858 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:43:55.858 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:55.858 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:43:55.858 issued rwts: total=5027,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:55.858 latency : target=0, window=0, percentile=100.00%, depth=64 00:43:55.858 job2: (groupid=0, jobs=1): err= 0: pid=90421: Mon Jul 22 13:06:13 2024 00:43:55.858 read: IOPS=735, BW=184MiB/s (193MB/s)(1855MiB/10088msec) 00:43:55.858 slat (usec): min=17, max=72568, avg=1316.03, stdev=4792.92 00:43:55.858 clat (msec): min=15, max=203, avg=85.49, stdev=22.29 00:43:55.858 lat (msec): min=15, max=204, avg=86.81, stdev=22.98 00:43:55.858 clat percentiles (msec): 00:43:55.858 | 1.00th=[ 43], 5.00th=[ 53], 10.00th=[ 56], 20.00th=[ 63], 00:43:55.858 | 30.00th=[ 73], 40.00th=[ 84], 50.00th=[ 87], 60.00th=[ 90], 00:43:55.858 | 70.00th=[ 95], 80.00th=[ 103], 90.00th=[ 117], 95.00th=[ 125], 00:43:55.858 | 99.00th=[ 136], 99.50th=[ 142], 99.90th=[ 178], 99.95th=[ 178], 00:43:55.858 | 99.99th=[ 205] 00:43:55.858 bw ( KiB/s): min=128766, max=282112, per=8.84%, avg=188199.35, stdev=44165.09, samples=20 00:43:55.858 iops : min= 502, 
max= 1102, avg=735.00, stdev=172.64, samples=20 00:43:55.858 lat (msec) : 20=0.08%, 50=3.01%, 100=75.19%, 250=21.73% 00:43:55.858 cpu : usr=0.29%, sys=2.28%, ctx=1499, majf=0, minf=4097 00:43:55.858 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:43:55.858 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:55.858 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:43:55.858 issued rwts: total=7420,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:55.858 latency : target=0, window=0, percentile=100.00%, depth=64 00:43:55.858 job3: (groupid=0, jobs=1): err= 0: pid=90422: Mon Jul 22 13:06:13 2024 00:43:55.858 read: IOPS=892, BW=223MiB/s (234MB/s)(2252MiB/10089msec) 00:43:55.858 slat (usec): min=15, max=49805, avg=1100.49, stdev=3891.03 00:43:55.858 clat (msec): min=33, max=207, avg=70.46, stdev=24.56 00:43:55.858 lat (msec): min=33, max=208, avg=71.56, stdev=25.09 00:43:55.858 clat percentiles (msec): 00:43:55.858 | 1.00th=[ 44], 5.00th=[ 49], 10.00th=[ 52], 20.00th=[ 55], 00:43:55.858 | 30.00th=[ 57], 40.00th=[ 59], 50.00th=[ 62], 60.00th=[ 65], 00:43:55.858 | 70.00th=[ 68], 80.00th=[ 82], 90.00th=[ 118], 95.00th=[ 128], 00:43:55.858 | 99.00th=[ 140], 99.50th=[ 144], 99.90th=[ 197], 99.95th=[ 197], 00:43:55.858 | 99.99th=[ 209] 00:43:55.858 bw ( KiB/s): min=129024, max=288256, per=10.75%, avg=228800.15, stdev=63960.57, samples=20 00:43:55.858 iops : min= 504, max= 1126, avg=893.50, stdev=249.94, samples=20 00:43:55.858 lat (msec) : 50=7.09%, 100=77.86%, 250=15.04% 00:43:55.858 cpu : usr=0.37%, sys=2.80%, ctx=1551, majf=0, minf=4097 00:43:55.858 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:43:55.858 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:55.858 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:43:55.858 issued rwts: total=9007,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:55.858 latency : target=0, window=0, percentile=100.00%, depth=64 00:43:55.858 job4: (groupid=0, jobs=1): err= 0: pid=90423: Mon Jul 22 13:06:13 2024 00:43:55.858 read: IOPS=1014, BW=254MiB/s (266MB/s)(2541MiB/10017msec) 00:43:55.858 slat (usec): min=20, max=60515, avg=965.80, stdev=3946.14 00:43:55.858 clat (msec): min=10, max=205, avg=61.98, stdev=39.36 00:43:55.858 lat (msec): min=10, max=213, avg=62.95, stdev=40.09 00:43:55.858 clat percentiles (msec): 00:43:55.858 | 1.00th=[ 18], 5.00th=[ 22], 10.00th=[ 25], 20.00th=[ 29], 00:43:55.858 | 30.00th=[ 32], 40.00th=[ 36], 50.00th=[ 52], 60.00th=[ 61], 00:43:55.858 | 70.00th=[ 71], 80.00th=[ 105], 90.00th=[ 125], 95.00th=[ 138], 00:43:55.858 | 99.00th=[ 157], 99.50th=[ 161], 99.90th=[ 171], 99.95th=[ 171], 00:43:55.858 | 99.99th=[ 203] 00:43:55.858 bw ( KiB/s): min=102912, max=560640, per=12.15%, avg=258575.40, stdev=164920.79, samples=20 00:43:55.858 iops : min= 402, max= 2190, avg=1009.90, stdev=644.31, samples=20 00:43:55.858 lat (msec) : 20=2.73%, 50=46.53%, 100=29.37%, 250=21.38% 00:43:55.858 cpu : usr=0.45%, sys=2.86%, ctx=1733, majf=0, minf=4097 00:43:55.858 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:43:55.858 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:55.858 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:43:55.858 issued rwts: total=10165,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:55.858 latency : target=0, window=0, percentile=100.00%, depth=64 00:43:55.858 job5: (groupid=0, jobs=1): err= 0: pid=90424: Mon Jul 
22 13:06:13 2024 00:43:55.858 read: IOPS=559, BW=140MiB/s (147MB/s)(1415MiB/10105msec) 00:43:55.858 slat (usec): min=18, max=95157, avg=1739.01, stdev=6110.78 00:43:55.858 clat (msec): min=32, max=260, avg=112.41, stdev=28.13 00:43:55.858 lat (msec): min=33, max=260, avg=114.15, stdev=29.03 00:43:55.858 clat percentiles (msec): 00:43:55.858 | 1.00th=[ 72], 5.00th=[ 80], 10.00th=[ 84], 20.00th=[ 87], 00:43:55.858 | 30.00th=[ 92], 40.00th=[ 96], 50.00th=[ 104], 60.00th=[ 117], 00:43:55.858 | 70.00th=[ 128], 80.00th=[ 146], 90.00th=[ 155], 95.00th=[ 159], 00:43:55.858 | 99.00th=[ 171], 99.50th=[ 184], 99.90th=[ 226], 99.95th=[ 226], 00:43:55.858 | 99.99th=[ 262] 00:43:55.858 bw ( KiB/s): min=96256, max=185996, per=6.72%, avg=143155.05, stdev=31046.06, samples=20 00:43:55.859 iops : min= 376, max= 726, avg=558.90, stdev=121.34, samples=20 00:43:55.859 lat (msec) : 50=0.09%, 100=47.21%, 250=52.67%, 500=0.04% 00:43:55.859 cpu : usr=0.17%, sys=1.84%, ctx=1219, majf=0, minf=4097 00:43:55.859 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:43:55.859 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:55.859 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:43:55.859 issued rwts: total=5658,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:55.859 latency : target=0, window=0, percentile=100.00%, depth=64 00:43:55.859 job6: (groupid=0, jobs=1): err= 0: pid=90425: Mon Jul 22 13:06:13 2024 00:43:55.859 read: IOPS=557, BW=139MiB/s (146MB/s)(1409MiB/10107msec) 00:43:55.859 slat (usec): min=19, max=68108, avg=1757.74, stdev=5929.05 00:43:55.859 clat (msec): min=16, max=265, avg=112.86, stdev=37.77 00:43:55.859 lat (msec): min=16, max=265, avg=114.62, stdev=38.66 00:43:55.859 clat percentiles (msec): 00:43:55.859 | 1.00th=[ 40], 5.00th=[ 53], 10.00th=[ 57], 20.00th=[ 65], 00:43:55.859 | 30.00th=[ 105], 40.00th=[ 118], 50.00th=[ 122], 60.00th=[ 127], 00:43:55.859 | 70.00th=[ 133], 80.00th=[ 146], 90.00th=[ 157], 95.00th=[ 161], 00:43:55.859 | 99.00th=[ 182], 99.50th=[ 207], 99.90th=[ 266], 99.95th=[ 266], 00:43:55.859 | 99.99th=[ 266] 00:43:55.859 bw ( KiB/s): min=95232, max=266752, per=6.70%, avg=142522.60, stdev=54372.28, samples=20 00:43:55.859 iops : min= 372, max= 1042, avg=556.55, stdev=212.33, samples=20 00:43:55.859 lat (msec) : 20=0.43%, 50=3.28%, 100=24.65%, 250=71.51%, 500=0.12% 00:43:55.859 cpu : usr=0.18%, sys=1.81%, ctx=1120, majf=0, minf=4097 00:43:55.859 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:43:55.859 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:55.859 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:43:55.859 issued rwts: total=5634,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:55.859 latency : target=0, window=0, percentile=100.00%, depth=64 00:43:55.859 job7: (groupid=0, jobs=1): err= 0: pid=90426: Mon Jul 22 13:06:13 2024 00:43:55.859 read: IOPS=802, BW=201MiB/s (210MB/s)(2025MiB/10091msec) 00:43:55.859 slat (usec): min=20, max=68549, avg=1212.54, stdev=4341.51 00:43:55.859 clat (msec): min=26, max=216, avg=78.35, stdev=23.53 00:43:55.859 lat (msec): min=27, max=216, avg=79.57, stdev=24.13 00:43:55.859 clat percentiles (msec): 00:43:55.859 | 1.00th=[ 44], 5.00th=[ 50], 10.00th=[ 54], 20.00th=[ 58], 00:43:55.859 | 30.00th=[ 62], 40.00th=[ 66], 50.00th=[ 74], 60.00th=[ 84], 00:43:55.859 | 70.00th=[ 90], 80.00th=[ 96], 90.00th=[ 115], 95.00th=[ 123], 00:43:55.859 | 99.00th=[ 138], 99.50th=[ 146], 99.90th=[ 180], 99.95th=[ 
205], 00:43:55.859 | 99.99th=[ 218] 00:43:55.859 bw ( KiB/s): min=125440, max=283592, per=9.66%, avg=205647.90, stdev=54622.06, samples=20 00:43:55.859 iops : min= 490, max= 1107, avg=803.20, stdev=213.35, samples=20 00:43:55.859 lat (msec) : 50=5.83%, 100=77.74%, 250=16.44% 00:43:55.859 cpu : usr=0.30%, sys=2.40%, ctx=1685, majf=0, minf=4097 00:43:55.859 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:43:55.859 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:55.859 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:43:55.859 issued rwts: total=8098,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:55.859 latency : target=0, window=0, percentile=100.00%, depth=64 00:43:55.859 job8: (groupid=0, jobs=1): err= 0: pid=90427: Mon Jul 22 13:06:13 2024 00:43:55.859 read: IOPS=792, BW=198MiB/s (208MB/s)(2004MiB/10112msec) 00:43:55.859 slat (usec): min=20, max=55271, avg=1224.19, stdev=4285.07 00:43:55.859 clat (msec): min=14, max=251, avg=79.36, stdev=32.26 00:43:55.859 lat (msec): min=15, max=251, avg=80.58, stdev=32.89 00:43:55.859 clat percentiles (msec): 00:43:55.859 | 1.00th=[ 42], 5.00th=[ 51], 10.00th=[ 54], 20.00th=[ 58], 00:43:55.859 | 30.00th=[ 61], 40.00th=[ 64], 50.00th=[ 67], 60.00th=[ 74], 00:43:55.859 | 70.00th=[ 86], 80.00th=[ 93], 90.00th=[ 144], 95.00th=[ 159], 00:43:55.859 | 99.00th=[ 169], 99.50th=[ 192], 99.90th=[ 232], 99.95th=[ 232], 00:43:55.859 | 99.99th=[ 253] 00:43:55.859 bw ( KiB/s): min=101386, max=271872, per=9.56%, avg=203424.70, stdev=64916.46, samples=20 00:43:55.859 iops : min= 396, max= 1062, avg=794.50, stdev=253.51, samples=20 00:43:55.859 lat (msec) : 20=0.26%, 50=3.82%, 100=82.21%, 250=13.67%, 500=0.04% 00:43:55.859 cpu : usr=0.23%, sys=2.48%, ctx=1675, majf=0, minf=4097 00:43:55.859 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:43:55.859 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:55.859 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:43:55.859 issued rwts: total=8016,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:55.859 latency : target=0, window=0, percentile=100.00%, depth=64 00:43:55.859 job9: (groupid=0, jobs=1): err= 0: pid=90428: Mon Jul 22 13:06:13 2024 00:43:55.859 read: IOPS=1178, BW=295MiB/s (309MB/s)(2948MiB/10009msec) 00:43:55.859 slat (usec): min=21, max=42907, avg=842.54, stdev=3063.60 00:43:55.859 clat (msec): min=7, max=144, avg=53.39, stdev=21.36 00:43:55.859 lat (msec): min=8, max=170, avg=54.23, stdev=21.77 00:43:55.859 clat percentiles (msec): 00:43:55.859 | 1.00th=[ 20], 5.00th=[ 24], 10.00th=[ 27], 20.00th=[ 32], 00:43:55.859 | 30.00th=[ 36], 40.00th=[ 53], 50.00th=[ 57], 60.00th=[ 61], 00:43:55.859 | 70.00th=[ 64], 80.00th=[ 67], 90.00th=[ 74], 95.00th=[ 89], 00:43:55.859 | 99.00th=[ 125], 99.50th=[ 128], 99.90th=[ 136], 99.95th=[ 138], 00:43:55.859 | 99.99th=[ 142] 00:43:55.859 bw ( KiB/s): min=138752, max=548352, per=13.54%, avg=288197.95, stdev=112976.08, samples=19 00:43:55.859 iops : min= 542, max= 2142, avg=1125.68, stdev=441.32, samples=19 00:43:55.859 lat (msec) : 10=0.07%, 20=1.47%, 50=36.64%, 100=58.88%, 250=2.95% 00:43:55.859 cpu : usr=0.52%, sys=3.73%, ctx=2424, majf=0, minf=4097 00:43:55.859 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:43:55.859 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:55.859 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:43:55.859 issued rwts: 
total=11792,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:55.859 latency : target=0, window=0, percentile=100.00%, depth=64 00:43:55.859 job10: (groupid=0, jobs=1): err= 0: pid=90429: Mon Jul 22 13:06:13 2024 00:43:55.859 read: IOPS=781, BW=195MiB/s (205MB/s)(1972MiB/10094msec) 00:43:55.859 slat (usec): min=18, max=61528, avg=1249.36, stdev=4400.73 00:43:55.859 clat (msec): min=22, max=220, avg=80.52, stdev=25.80 00:43:55.859 lat (msec): min=23, max=239, avg=81.77, stdev=26.43 00:43:55.859 clat percentiles (msec): 00:43:55.859 | 1.00th=[ 46], 5.00th=[ 53], 10.00th=[ 55], 20.00th=[ 59], 00:43:55.859 | 30.00th=[ 62], 40.00th=[ 66], 50.00th=[ 71], 60.00th=[ 86], 00:43:55.859 | 70.00th=[ 92], 80.00th=[ 101], 90.00th=[ 122], 95.00th=[ 129], 00:43:55.859 | 99.00th=[ 146], 99.50th=[ 174], 99.90th=[ 197], 99.95th=[ 197], 00:43:55.859 | 99.99th=[ 222] 00:43:55.859 bw ( KiB/s): min=125952, max=271329, per=9.41%, avg=200247.80, stdev=57239.31, samples=20 00:43:55.859 iops : min= 492, max= 1059, avg=782.10, stdev=223.49, samples=20 00:43:55.859 lat (msec) : 50=2.97%, 100=76.99%, 250=20.04% 00:43:55.859 cpu : usr=0.30%, sys=2.47%, ctx=1637, majf=0, minf=4097 00:43:55.859 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:43:55.859 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:55.859 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:43:55.859 issued rwts: total=7888,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:55.859 latency : target=0, window=0, percentile=100.00%, depth=64 00:43:55.859 00:43:55.859 Run status group 0 (all jobs): 00:43:55.859 READ: bw=2079MiB/s (2180MB/s), 124MiB/s-295MiB/s (130MB/s-309MB/s), io=20.5GiB (22.0GB), run=10009-10115msec 00:43:55.859 00:43:55.859 Disk stats (read/write): 00:43:55.859 nvme0n1: ios=10722/0, merge=0/0, ticks=1238790/0, in_queue=1238790, util=97.54% 00:43:55.859 nvme10n1: ios=9933/0, merge=0/0, ticks=1237987/0, in_queue=1237987, util=97.71% 00:43:55.859 nvme1n1: ios=14717/0, merge=0/0, ticks=1235702/0, in_queue=1235702, util=97.78% 00:43:55.859 nvme2n1: ios=17892/0, merge=0/0, ticks=1236141/0, in_queue=1236141, util=97.96% 00:43:55.859 nvme3n1: ios=19214/0, merge=0/0, ticks=1207246/0, in_queue=1207246, util=97.91% 00:43:55.859 nvme4n1: ios=11194/0, merge=0/0, ticks=1239667/0, in_queue=1239667, util=98.08% 00:43:55.859 nvme5n1: ios=11155/0, merge=0/0, ticks=1237621/0, in_queue=1237621, util=98.20% 00:43:55.859 nvme6n1: ios=16085/0, merge=0/0, ticks=1236602/0, in_queue=1236602, util=98.17% 00:43:55.859 nvme7n1: ios=15912/0, merge=0/0, ticks=1231642/0, in_queue=1231642, util=98.44% 00:43:55.859 nvme8n1: ios=22510/0, merge=0/0, ticks=1204910/0, in_queue=1204910, util=98.75% 00:43:55.859 nvme9n1: ios=15659/0, merge=0/0, ticks=1234185/0, in_queue=1234185, util=98.74% 00:43:55.859 13:06:13 -- target/multiconnection.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:43:55.859 [global] 00:43:55.859 thread=1 00:43:55.859 invalidate=1 00:43:55.859 rw=randwrite 00:43:55.859 time_based=1 00:43:55.859 runtime=10 00:43:55.859 ioengine=libaio 00:43:55.859 direct=1 00:43:55.859 bs=262144 00:43:55.859 iodepth=64 00:43:55.859 norandommap=1 00:43:55.859 numjobs=1 00:43:55.859 00:43:55.859 [job0] 00:43:55.859 filename=/dev/nvme0n1 00:43:55.859 [job1] 00:43:55.859 filename=/dev/nvme10n1 00:43:55.859 [job2] 00:43:55.859 filename=/dev/nvme1n1 00:43:55.859 [job3] 00:43:55.859 filename=/dev/nvme2n1 00:43:55.859 [job4] 00:43:55.859 filename=/dev/nvme3n1 
00:43:55.859 [job5] 00:43:55.859 filename=/dev/nvme4n1 00:43:55.859 [job6] 00:43:55.859 filename=/dev/nvme5n1 00:43:55.859 [job7] 00:43:55.859 filename=/dev/nvme6n1 00:43:55.859 [job8] 00:43:55.859 filename=/dev/nvme7n1 00:43:55.859 [job9] 00:43:55.859 filename=/dev/nvme8n1 00:43:55.859 [job10] 00:43:55.860 filename=/dev/nvme9n1 00:43:55.860 Could not set queue depth (nvme0n1) 00:43:55.860 Could not set queue depth (nvme10n1) 00:43:55.860 Could not set queue depth (nvme1n1) 00:43:55.860 Could not set queue depth (nvme2n1) 00:43:55.860 Could not set queue depth (nvme3n1) 00:43:55.860 Could not set queue depth (nvme4n1) 00:43:55.860 Could not set queue depth (nvme5n1) 00:43:55.860 Could not set queue depth (nvme6n1) 00:43:55.860 Could not set queue depth (nvme7n1) 00:43:55.860 Could not set queue depth (nvme8n1) 00:43:55.860 Could not set queue depth (nvme9n1) 00:43:55.860 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:43:55.860 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:43:55.860 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:43:55.860 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:43:55.860 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:43:55.860 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:43:55.860 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:43:55.860 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:43:55.860 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:43:55.860 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:43:55.860 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:43:55.860 fio-3.35 00:43:55.860 Starting 11 threads 00:44:05.842 00:44:05.842 job0: (groupid=0, jobs=1): err= 0: pid=90625: Mon Jul 22 13:06:23 2024 00:44:05.842 write: IOPS=1367, BW=342MiB/s (359MB/s)(3434MiB/10040msec); 0 zone resets 00:44:05.842 slat (usec): min=18, max=6425, avg=724.39, stdev=1215.06 00:44:05.842 clat (usec): min=6747, max=79015, avg=46036.96, stdev=6202.00 00:44:05.842 lat (usec): min=6792, max=81492, avg=46761.35, stdev=6263.62 00:44:05.842 clat percentiles (usec): 00:44:05.842 | 1.00th=[39584], 5.00th=[40633], 10.00th=[41157], 20.00th=[41681], 00:44:05.842 | 30.00th=[42206], 40.00th=[43254], 50.00th=[43779], 60.00th=[44827], 00:44:05.842 | 70.00th=[47973], 80.00th=[50594], 90.00th=[53740], 95.00th=[58459], 00:44:05.842 | 99.00th=[65799], 99.50th=[68682], 99.90th=[72877], 99.95th=[76022], 00:44:05.842 | 99.99th=[79168] 00:44:05.842 bw ( KiB/s): min=292864, max=385024, per=23.06%, avg=349977.60, stdev=33860.72, samples=20 00:44:05.842 iops : min= 1144, max= 1504, avg=1367.10, stdev=132.27, samples=20 00:44:05.842 lat (msec) : 10=0.09%, 20=0.12%, 50=78.19%, 100=21.60% 00:44:05.842 cpu : usr=2.21%, sys=2.93%, ctx=17856, majf=0, minf=1 00:44:05.842 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:44:05.842 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:05.842 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:44:05.842 issued rwts: total=0,13734,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:05.842 latency : target=0, window=0, percentile=100.00%, depth=64 00:44:05.842 job1: (groupid=0, jobs=1): err= 0: pid=90626: Mon Jul 22 13:06:23 2024 00:44:05.842 write: IOPS=423, BW=106MiB/s (111MB/s)(1073MiB/10133msec); 0 zone resets 00:44:05.842 slat (usec): min=20, max=37339, avg=2292.52, stdev=4171.31 00:44:05.842 clat (msec): min=10, max=281, avg=148.71, stdev=36.39 00:44:05.842 lat (msec): min=10, max=281, avg=151.00, stdev=36.77 00:44:05.842 clat percentiles (msec): 00:44:05.842 | 1.00th=[ 40], 5.00th=[ 79], 10.00th=[ 83], 20.00th=[ 138], 00:44:05.842 | 30.00th=[ 153], 40.00th=[ 157], 50.00th=[ 161], 60.00th=[ 165], 00:44:05.842 | 70.00th=[ 169], 80.00th=[ 176], 90.00th=[ 178], 95.00th=[ 180], 00:44:05.842 | 99.00th=[ 186], 99.50th=[ 234], 99.90th=[ 271], 99.95th=[ 271], 00:44:05.842 | 99.99th=[ 284] 00:44:05.842 bw ( KiB/s): min=92160, max=190464, per=7.14%, avg=108288.00, stdev=28435.48, samples=20 00:44:05.842 iops : min= 360, max= 744, avg=423.00, stdev=111.08, samples=20 00:44:05.842 lat (msec) : 20=0.28%, 50=1.09%, 100=15.96%, 250=82.34%, 500=0.33% 00:44:05.842 cpu : usr=0.74%, sys=1.18%, ctx=4295, majf=0, minf=1 00:44:05.842 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:44:05.842 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:05.842 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:44:05.842 issued rwts: total=0,4293,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:05.842 latency : target=0, window=0, percentile=100.00%, depth=64 00:44:05.842 job2: (groupid=0, jobs=1): err= 0: pid=90639: Mon Jul 22 13:06:23 2024 00:44:05.842 write: IOPS=438, BW=110MiB/s (115MB/s)(1110MiB/10130msec); 0 zone resets 00:44:05.842 slat (usec): min=21, max=26988, avg=2248.42, stdev=3951.23 00:44:05.842 clat (msec): min=20, max=273, avg=143.68, stdev=28.87 00:44:05.842 lat (msec): min=20, max=273, avg=145.93, stdev=29.06 00:44:05.842 clat percentiles (msec): 00:44:05.842 | 1.00th=[ 82], 5.00th=[ 111], 10.00th=[ 112], 20.00th=[ 118], 00:44:05.842 | 30.00th=[ 120], 40.00th=[ 120], 50.00th=[ 148], 60.00th=[ 157], 00:44:05.842 | 70.00th=[ 167], 80.00th=[ 176], 90.00th=[ 178], 95.00th=[ 180], 00:44:05.842 | 99.00th=[ 184], 99.50th=[ 215], 99.90th=[ 266], 99.95th=[ 266], 00:44:05.842 | 99.99th=[ 275] 00:44:05.842 bw ( KiB/s): min=90112, max=139264, per=7.39%, avg=112076.80, stdev=20729.69, samples=20 00:44:05.842 iops : min= 352, max= 544, avg=437.80, stdev=80.98, samples=20 00:44:05.842 lat (msec) : 50=0.56%, 100=0.70%, 250=98.51%, 500=0.23% 00:44:05.842 cpu : usr=0.97%, sys=1.12%, ctx=5558, majf=0, minf=1 00:44:05.842 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:44:05.842 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:05.842 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:44:05.842 issued rwts: total=0,4441,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:05.842 latency : target=0, window=0, percentile=100.00%, depth=64 00:44:05.842 job3: (groupid=0, jobs=1): err= 0: pid=90640: Mon Jul 22 13:06:23 2024 00:44:05.842 write: IOPS=484, BW=121MiB/s (127MB/s)(1222MiB/10101msec); 0 zone resets 00:44:05.842 slat (usec): min=22, max=14525, avg=2040.04, stdev=3494.60 00:44:05.842 clat (msec): min=17, max=212, 
avg=130.14, stdev=15.18 00:44:05.842 lat (msec): min=17, max=212, avg=132.18, stdev=15.01 00:44:05.842 clat percentiles (msec): 00:44:05.842 | 1.00th=[ 109], 5.00th=[ 114], 10.00th=[ 115], 20.00th=[ 121], 00:44:05.842 | 30.00th=[ 123], 40.00th=[ 123], 50.00th=[ 126], 60.00th=[ 134], 00:44:05.842 | 70.00th=[ 138], 80.00th=[ 142], 90.00th=[ 146], 95.00th=[ 161], 00:44:05.842 | 99.00th=[ 169], 99.50th=[ 180], 99.90th=[ 207], 99.95th=[ 207], 00:44:05.842 | 99.99th=[ 213] 00:44:05.842 bw ( KiB/s): min=98304, max=137216, per=8.14%, avg=123545.60, stdev=11850.28, samples=20 00:44:05.842 iops : min= 384, max= 536, avg=482.60, stdev=46.29, samples=20 00:44:05.842 lat (msec) : 20=0.08%, 50=0.16%, 100=0.51%, 250=99.24% 00:44:05.842 cpu : usr=1.22%, sys=1.40%, ctx=6576, majf=0, minf=1 00:44:05.842 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:44:05.842 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:05.842 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:44:05.842 issued rwts: total=0,4889,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:05.842 latency : target=0, window=0, percentile=100.00%, depth=64 00:44:05.842 job4: (groupid=0, jobs=1): err= 0: pid=90641: Mon Jul 22 13:06:23 2024 00:44:05.842 write: IOPS=487, BW=122MiB/s (128MB/s)(1231MiB/10105msec); 0 zone resets 00:44:05.842 slat (usec): min=21, max=27934, avg=2020.03, stdev=3559.79 00:44:05.842 clat (msec): min=7, max=220, avg=129.24, stdev=26.47 00:44:05.842 lat (msec): min=7, max=220, avg=131.26, stdev=26.66 00:44:05.842 clat percentiles (msec): 00:44:05.842 | 1.00th=[ 55], 5.00th=[ 79], 10.00th=[ 83], 20.00th=[ 117], 00:44:05.842 | 30.00th=[ 126], 40.00th=[ 132], 50.00th=[ 136], 60.00th=[ 138], 00:44:05.842 | 70.00th=[ 144], 80.00th=[ 153], 90.00th=[ 157], 95.00th=[ 159], 00:44:05.842 | 99.00th=[ 161], 99.50th=[ 176], 99.90th=[ 213], 99.95th=[ 213], 00:44:05.842 | 99.99th=[ 222] 00:44:05.842 bw ( KiB/s): min=102400, max=201728, per=8.20%, avg=124467.20, stdev=26050.85, samples=20 00:44:05.842 iops : min= 400, max= 788, avg=486.20, stdev=101.76, samples=20 00:44:05.842 lat (msec) : 10=0.08%, 20=0.16%, 50=0.65%, 100=14.25%, 250=84.85% 00:44:05.842 cpu : usr=1.15%, sys=1.44%, ctx=3735, majf=0, minf=1 00:44:05.842 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:44:05.842 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:05.842 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:44:05.842 issued rwts: total=0,4925,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:05.842 latency : target=0, window=0, percentile=100.00%, depth=64 00:44:05.842 job5: (groupid=0, jobs=1): err= 0: pid=90647: Mon Jul 22 13:06:23 2024 00:44:05.843 write: IOPS=439, BW=110MiB/s (115MB/s)(1112MiB/10130msec); 0 zone resets 00:44:05.843 slat (usec): min=21, max=31489, avg=2242.93, stdev=3937.77 00:44:05.843 clat (msec): min=3, max=279, avg=143.45, stdev=28.54 00:44:05.843 lat (msec): min=3, max=280, avg=145.70, stdev=28.71 00:44:05.843 clat percentiles (msec): 00:44:05.843 | 1.00th=[ 107], 5.00th=[ 112], 10.00th=[ 112], 20.00th=[ 118], 00:44:05.843 | 30.00th=[ 120], 40.00th=[ 121], 50.00th=[ 144], 60.00th=[ 155], 00:44:05.843 | 70.00th=[ 167], 80.00th=[ 176], 90.00th=[ 178], 95.00th=[ 180], 00:44:05.843 | 99.00th=[ 184], 99.50th=[ 222], 99.90th=[ 271], 99.95th=[ 271], 00:44:05.843 | 99.99th=[ 279] 00:44:05.843 bw ( KiB/s): min=91976, max=139264, per=7.40%, avg=112246.80, stdev=20560.04, samples=20 00:44:05.843 iops : min= 359, 
max= 544, avg=438.45, stdev=80.33, samples=20 00:44:05.843 lat (msec) : 4=0.04%, 20=0.16%, 50=0.27%, 100=0.45%, 250=98.76% 00:44:05.843 lat (msec) : 500=0.31% 00:44:05.843 cpu : usr=0.85%, sys=1.39%, ctx=4330, majf=0, minf=1 00:44:05.843 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:44:05.843 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:05.843 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:44:05.843 issued rwts: total=0,4448,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:05.843 latency : target=0, window=0, percentile=100.00%, depth=64 00:44:05.843 job6: (groupid=0, jobs=1): err= 0: pid=90649: Mon Jul 22 13:06:23 2024 00:44:05.843 write: IOPS=454, BW=114MiB/s (119MB/s)(1147MiB/10099msec); 0 zone resets 00:44:05.843 slat (usec): min=20, max=16075, avg=2144.83, stdev=3753.53 00:44:05.843 clat (msec): min=19, max=215, avg=138.66, stdev=17.30 00:44:05.843 lat (msec): min=19, max=215, avg=140.81, stdev=17.22 00:44:05.843 clat percentiles (msec): 00:44:05.843 | 1.00th=[ 78], 5.00th=[ 113], 10.00th=[ 118], 20.00th=[ 127], 00:44:05.843 | 30.00th=[ 133], 40.00th=[ 136], 50.00th=[ 138], 60.00th=[ 142], 00:44:05.843 | 70.00th=[ 150], 80.00th=[ 155], 90.00th=[ 159], 95.00th=[ 161], 00:44:05.843 | 99.00th=[ 163], 99.50th=[ 171], 99.90th=[ 209], 99.95th=[ 209], 00:44:05.843 | 99.99th=[ 215] 00:44:05.843 bw ( KiB/s): min=100352, max=135680, per=7.63%, avg=115840.00, stdev=10567.40, samples=20 00:44:05.843 iops : min= 392, max= 530, avg=452.50, stdev=41.28, samples=20 00:44:05.843 lat (msec) : 20=0.09%, 50=0.17%, 100=1.16%, 250=98.58% 00:44:05.843 cpu : usr=0.88%, sys=0.91%, ctx=3780, majf=0, minf=1 00:44:05.843 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:44:05.843 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:05.843 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:44:05.843 issued rwts: total=0,4588,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:05.843 latency : target=0, window=0, percentile=100.00%, depth=64 00:44:05.843 job7: (groupid=0, jobs=1): err= 0: pid=90650: Mon Jul 22 13:06:23 2024 00:44:05.843 write: IOPS=484, BW=121MiB/s (127MB/s)(1223MiB/10106msec); 0 zone resets 00:44:05.843 slat (usec): min=19, max=13944, avg=2039.10, stdev=3503.57 00:44:05.843 clat (msec): min=6, max=220, avg=130.13, stdev=15.97 00:44:05.843 lat (msec): min=6, max=220, avg=132.17, stdev=15.83 00:44:05.843 clat percentiles (msec): 00:44:05.843 | 1.00th=[ 109], 5.00th=[ 114], 10.00th=[ 115], 20.00th=[ 121], 00:44:05.843 | 30.00th=[ 123], 40.00th=[ 123], 50.00th=[ 127], 60.00th=[ 134], 00:44:05.843 | 70.00th=[ 138], 80.00th=[ 142], 90.00th=[ 146], 95.00th=[ 161], 00:44:05.843 | 99.00th=[ 171], 99.50th=[ 180], 99.90th=[ 213], 99.95th=[ 213], 00:44:05.843 | 99.99th=[ 222] 00:44:05.843 bw ( KiB/s): min=101376, max=135168, per=8.15%, avg=123622.40, stdev=11106.70, samples=20 00:44:05.843 iops : min= 396, max= 528, avg=482.90, stdev=43.39, samples=20 00:44:05.843 lat (msec) : 10=0.12%, 20=0.10%, 50=0.22%, 100=0.41%, 250=99.14% 00:44:05.843 cpu : usr=1.07%, sys=1.23%, ctx=6384, majf=0, minf=1 00:44:05.843 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:44:05.843 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:05.843 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:44:05.843 issued rwts: total=0,4892,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:05.843 latency : target=0, window=0, 
percentile=100.00%, depth=64 00:44:05.843 job8: (groupid=0, jobs=1): err= 0: pid=90651: Mon Jul 22 13:06:23 2024 00:44:05.843 write: IOPS=483, BW=121MiB/s (127MB/s)(1221MiB/10103msec); 0 zone resets 00:44:05.843 slat (usec): min=19, max=14816, avg=2043.35, stdev=3522.60 00:44:05.843 clat (msec): min=6, max=219, avg=130.25, stdev=15.59 00:44:05.843 lat (msec): min=6, max=219, avg=132.29, stdev=15.43 00:44:05.843 clat percentiles (msec): 00:44:05.843 | 1.00th=[ 109], 5.00th=[ 114], 10.00th=[ 115], 20.00th=[ 121], 00:44:05.843 | 30.00th=[ 123], 40.00th=[ 123], 50.00th=[ 126], 60.00th=[ 134], 00:44:05.843 | 70.00th=[ 138], 80.00th=[ 142], 90.00th=[ 146], 95.00th=[ 161], 00:44:05.843 | 99.00th=[ 171], 99.50th=[ 180], 99.90th=[ 211], 99.95th=[ 211], 00:44:05.843 | 99.99th=[ 220] 00:44:05.843 bw ( KiB/s): min=97474, max=135680, per=8.13%, avg=123452.90, stdev=11653.64, samples=20 00:44:05.843 iops : min= 380, max= 530, avg=482.20, stdev=45.61, samples=20 00:44:05.843 lat (msec) : 10=0.04%, 50=0.25%, 100=0.41%, 250=99.30% 00:44:05.843 cpu : usr=0.77%, sys=1.01%, ctx=6714, majf=0, minf=1 00:44:05.843 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:44:05.843 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:05.843 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:44:05.843 issued rwts: total=0,4885,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:05.843 latency : target=0, window=0, percentile=100.00%, depth=64 00:44:05.843 job9: (groupid=0, jobs=1): err= 0: pid=90652: Mon Jul 22 13:06:23 2024 00:44:05.843 write: IOPS=449, BW=112MiB/s (118MB/s)(1135MiB/10092msec); 0 zone resets 00:44:05.843 slat (usec): min=17, max=94116, avg=2167.49, stdev=4014.05 00:44:05.843 clat (msec): min=51, max=229, avg=140.02, stdev=17.57 00:44:05.843 lat (msec): min=52, max=229, avg=142.19, stdev=17.46 00:44:05.843 clat percentiles (msec): 00:44:05.843 | 1.00th=[ 102], 5.00th=[ 115], 10.00th=[ 118], 20.00th=[ 127], 00:44:05.843 | 30.00th=[ 133], 40.00th=[ 136], 50.00th=[ 138], 60.00th=[ 142], 00:44:05.843 | 70.00th=[ 153], 80.00th=[ 159], 90.00th=[ 161], 95.00th=[ 165], 00:44:05.843 | 99.00th=[ 180], 99.50th=[ 190], 99.90th=[ 218], 99.95th=[ 230], 00:44:05.843 | 99.99th=[ 230] 00:44:05.843 bw ( KiB/s): min=79872, max=135680, per=7.55%, avg=114636.80, stdev=13545.13, samples=20 00:44:05.843 iops : min= 312, max= 530, avg=447.80, stdev=52.91, samples=20 00:44:05.843 lat (msec) : 100=0.97%, 250=99.03% 00:44:05.843 cpu : usr=0.82%, sys=1.39%, ctx=5447, majf=0, minf=1 00:44:05.843 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:44:05.843 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:05.843 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:44:05.843 issued rwts: total=0,4541,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:05.843 latency : target=0, window=0, percentile=100.00%, depth=64 00:44:05.843 job10: (groupid=0, jobs=1): err= 0: pid=90653: Mon Jul 22 13:06:23 2024 00:44:05.843 write: IOPS=437, BW=109MiB/s (115MB/s)(1109MiB/10129msec); 0 zone resets 00:44:05.843 slat (usec): min=19, max=27713, avg=2250.15, stdev=3949.01 00:44:05.843 clat (msec): min=26, max=278, avg=143.90, stdev=28.14 00:44:05.843 lat (msec): min=26, max=278, avg=146.15, stdev=28.31 00:44:05.843 clat percentiles (msec): 00:44:05.843 | 1.00th=[ 105], 5.00th=[ 112], 10.00th=[ 112], 20.00th=[ 118], 00:44:05.843 | 30.00th=[ 120], 40.00th=[ 120], 50.00th=[ 148], 60.00th=[ 155], 00:44:05.843 | 70.00th=[ 167], 
80.00th=[ 176], 90.00th=[ 178], 95.00th=[ 180], 00:44:05.843 | 99.00th=[ 184], 99.50th=[ 220], 99.90th=[ 268], 99.95th=[ 268], 00:44:05.843 | 99.99th=[ 279] 00:44:05.843 bw ( KiB/s): min=92160, max=139264, per=7.37%, avg=111897.60, stdev=20628.66, samples=20 00:44:05.843 iops : min= 360, max= 544, avg=437.10, stdev=80.58, samples=20 00:44:05.843 lat (msec) : 50=0.27%, 100=0.63%, 250=98.89%, 500=0.20% 00:44:05.843 cpu : usr=0.97%, sys=1.39%, ctx=4957, majf=0, minf=1 00:44:05.843 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:44:05.843 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:05.843 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:44:05.843 issued rwts: total=0,4434,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:05.843 latency : target=0, window=0, percentile=100.00%, depth=64 00:44:05.843 00:44:05.843 Run status group 0 (all jobs): 00:44:05.843 WRITE: bw=1482MiB/s (1554MB/s), 106MiB/s-342MiB/s (111MB/s-359MB/s), io=14.7GiB (15.7GB), run=10040-10133msec 00:44:05.843 00:44:05.843 Disk stats (read/write): 00:44:05.843 nvme0n1: ios=49/27293, merge=0/0, ticks=27/1217772, in_queue=1217799, util=97.72% 00:44:05.843 nvme10n1: ios=49/8449, merge=0/0, ticks=55/1212782, in_queue=1212837, util=98.00% 00:44:05.843 nvme1n1: ios=38/8734, merge=0/0, ticks=49/1211068, in_queue=1211117, util=98.06% 00:44:05.843 nvme2n1: ios=13/9630, merge=0/0, ticks=13/1213472, in_queue=1213485, util=97.96% 00:44:05.843 nvme3n1: ios=0/9719, merge=0/0, ticks=0/1215204, in_queue=1215204, util=98.09% 00:44:05.843 nvme4n1: ios=0/8754, merge=0/0, ticks=0/1211856, in_queue=1211856, util=98.16% 00:44:05.843 nvme5n1: ios=0/9034, merge=0/0, ticks=0/1214437, in_queue=1214437, util=98.25% 00:44:05.843 nvme6n1: ios=0/9653, merge=0/0, ticks=0/1215284, in_queue=1215284, util=98.48% 00:44:05.843 nvme7n1: ios=0/9630, merge=0/0, ticks=0/1213520, in_queue=1213520, util=98.65% 00:44:05.843 nvme8n1: ios=0/8925, merge=0/0, ticks=0/1212779, in_queue=1212779, util=98.62% 00:44:05.843 nvme9n1: ios=0/8722, merge=0/0, ticks=0/1211375, in_queue=1211375, util=98.77% 00:44:05.843 13:06:23 -- target/multiconnection.sh@36 -- # sync 00:44:05.843 13:06:23 -- target/multiconnection.sh@37 -- # seq 1 11 00:44:05.843 13:06:23 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:44:05.843 13:06:23 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:44:05.843 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:44:05.843 13:06:24 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:44:05.843 13:06:24 -- common/autotest_common.sh@1198 -- # local i=0 00:44:05.843 13:06:24 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:44:05.843 13:06:24 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK1 00:44:05.843 13:06:24 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:44:05.843 13:06:24 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK1 00:44:05.843 13:06:24 -- common/autotest_common.sh@1210 -- # return 0 00:44:05.843 13:06:24 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:44:05.843 13:06:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:44:05.843 13:06:24 -- common/autotest_common.sh@10 -- # set +x 00:44:05.844 13:06:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:44:05.844 13:06:24 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:44:05.844 13:06:24 -- target/multiconnection.sh@38 
-- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:44:05.844 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:44:05.844 13:06:24 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:44:05.844 13:06:24 -- common/autotest_common.sh@1198 -- # local i=0 00:44:05.844 13:06:24 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK2 00:44:05.844 13:06:24 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:44:05.844 13:06:24 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK2 00:44:05.844 13:06:24 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:44:05.844 13:06:24 -- common/autotest_common.sh@1210 -- # return 0 00:44:05.844 13:06:24 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:44:05.844 13:06:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:44:05.844 13:06:24 -- common/autotest_common.sh@10 -- # set +x 00:44:05.844 13:06:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:44:05.844 13:06:24 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:44:05.844 13:06:24 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:44:05.844 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:44:05.844 13:06:24 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:44:05.844 13:06:24 -- common/autotest_common.sh@1198 -- # local i=0 00:44:05.844 13:06:24 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:44:05.844 13:06:24 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK3 00:44:05.844 13:06:24 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:44:05.844 13:06:24 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK3 00:44:05.844 13:06:24 -- common/autotest_common.sh@1210 -- # return 0 00:44:05.844 13:06:24 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:44:05.844 13:06:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:44:05.844 13:06:24 -- common/autotest_common.sh@10 -- # set +x 00:44:05.844 13:06:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:44:05.844 13:06:24 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:44:05.844 13:06:24 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:44:05.844 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:44:05.844 13:06:24 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:44:05.844 13:06:24 -- common/autotest_common.sh@1198 -- # local i=0 00:44:05.844 13:06:24 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK4 00:44:05.844 13:06:24 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:44:05.844 13:06:24 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:44:05.844 13:06:24 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK4 00:44:05.844 13:06:24 -- common/autotest_common.sh@1210 -- # return 0 00:44:05.844 13:06:24 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:44:05.844 13:06:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:44:05.844 13:06:24 -- common/autotest_common.sh@10 -- # set +x 00:44:05.844 13:06:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:44:05.844 13:06:24 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:44:05.844 13:06:24 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:44:05.844 NQN:nqn.2016-06.io.spdk:cnode5 
disconnected 1 controller(s) 00:44:05.844 13:06:24 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:44:05.844 13:06:24 -- common/autotest_common.sh@1198 -- # local i=0 00:44:05.844 13:06:24 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK5 00:44:05.844 13:06:24 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:44:05.844 13:06:24 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:44:05.844 13:06:24 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK5 00:44:05.844 13:06:24 -- common/autotest_common.sh@1210 -- # return 0 00:44:05.844 13:06:24 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:44:05.844 13:06:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:44:05.844 13:06:24 -- common/autotest_common.sh@10 -- # set +x 00:44:05.844 13:06:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:44:05.844 13:06:24 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:44:05.844 13:06:24 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:44:05.844 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:44:05.844 13:06:24 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:44:05.844 13:06:24 -- common/autotest_common.sh@1198 -- # local i=0 00:44:05.844 13:06:24 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:44:05.844 13:06:24 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK6 00:44:05.844 13:06:24 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK6 00:44:05.844 13:06:24 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:44:05.844 13:06:24 -- common/autotest_common.sh@1210 -- # return 0 00:44:05.844 13:06:24 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:44:05.844 13:06:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:44:05.844 13:06:24 -- common/autotest_common.sh@10 -- # set +x 00:44:05.844 13:06:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:44:05.844 13:06:24 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:44:05.844 13:06:24 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:44:05.844 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:44:05.844 13:06:24 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:44:05.844 13:06:24 -- common/autotest_common.sh@1198 -- # local i=0 00:44:05.844 13:06:24 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:44:05.844 13:06:24 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK7 00:44:05.844 13:06:24 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:44:05.844 13:06:24 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK7 00:44:05.844 13:06:24 -- common/autotest_common.sh@1210 -- # return 0 00:44:05.844 13:06:24 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:44:05.844 13:06:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:44:05.844 13:06:24 -- common/autotest_common.sh@10 -- # set +x 00:44:05.844 13:06:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:44:05.844 13:06:24 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:44:05.844 13:06:24 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:44:05.844 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:44:05.844 13:06:24 -- target/multiconnection.sh@39 -- # 
waitforserial_disconnect SPDK8 00:44:05.844 13:06:24 -- common/autotest_common.sh@1198 -- # local i=0 00:44:05.844 13:06:24 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:44:05.844 13:06:24 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK8 00:44:05.844 13:06:24 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:44:05.844 13:06:24 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK8 00:44:05.844 13:06:24 -- common/autotest_common.sh@1210 -- # return 0 00:44:05.844 13:06:24 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:44:05.844 13:06:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:44:05.844 13:06:24 -- common/autotest_common.sh@10 -- # set +x 00:44:05.844 13:06:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:44:05.844 13:06:24 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:44:05.844 13:06:24 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:44:05.844 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:44:05.844 13:06:24 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:44:05.844 13:06:25 -- common/autotest_common.sh@1198 -- # local i=0 00:44:05.844 13:06:25 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:44:05.844 13:06:25 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK9 00:44:05.844 13:06:25 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:44:05.844 13:06:25 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK9 00:44:05.844 13:06:25 -- common/autotest_common.sh@1210 -- # return 0 00:44:05.844 13:06:25 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:44:05.844 13:06:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:44:05.844 13:06:25 -- common/autotest_common.sh@10 -- # set +x 00:44:05.844 13:06:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:44:05.844 13:06:25 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:44:05.844 13:06:25 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:44:05.844 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:44:05.844 13:06:25 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:44:05.844 13:06:25 -- common/autotest_common.sh@1198 -- # local i=0 00:44:05.844 13:06:25 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:44:05.844 13:06:25 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK10 00:44:05.844 13:06:25 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK10 00:44:05.844 13:06:25 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:44:05.844 13:06:25 -- common/autotest_common.sh@1210 -- # return 0 00:44:05.844 13:06:25 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:44:05.844 13:06:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:44:05.844 13:06:25 -- common/autotest_common.sh@10 -- # set +x 00:44:05.844 13:06:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:44:05.844 13:06:25 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:44:05.844 13:06:25 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:44:05.844 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:44:05.844 13:06:25 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:44:05.844 13:06:25 -- common/autotest_common.sh@1198 -- 
# local i=0 00:44:05.844 13:06:25 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:44:05.844 13:06:25 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK11 00:44:05.844 13:06:25 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:44:05.844 13:06:25 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK11 00:44:05.844 13:06:25 -- common/autotest_common.sh@1210 -- # return 0 00:44:05.844 13:06:25 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:44:05.844 13:06:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:44:05.844 13:06:25 -- common/autotest_common.sh@10 -- # set +x 00:44:05.844 13:06:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:44:05.844 13:06:25 -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:44:05.844 13:06:25 -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:44:05.844 13:06:25 -- target/multiconnection.sh@47 -- # nvmftestfini 00:44:05.845 13:06:25 -- nvmf/common.sh@476 -- # nvmfcleanup 00:44:05.845 13:06:25 -- nvmf/common.sh@116 -- # sync 00:44:05.845 13:06:25 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:44:05.845 13:06:25 -- nvmf/common.sh@119 -- # set +e 00:44:05.845 13:06:25 -- nvmf/common.sh@120 -- # for i in {1..20} 00:44:05.845 13:06:25 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:44:05.845 rmmod nvme_tcp 00:44:05.845 rmmod nvme_fabrics 00:44:05.845 rmmod nvme_keyring 00:44:05.845 13:06:25 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:44:05.845 13:06:25 -- nvmf/common.sh@123 -- # set -e 00:44:05.845 13:06:25 -- nvmf/common.sh@124 -- # return 0 00:44:05.845 13:06:25 -- nvmf/common.sh@477 -- # '[' -n 89937 ']' 00:44:05.845 13:06:25 -- nvmf/common.sh@478 -- # killprocess 89937 00:44:05.845 13:06:25 -- common/autotest_common.sh@926 -- # '[' -z 89937 ']' 00:44:05.845 13:06:25 -- common/autotest_common.sh@930 -- # kill -0 89937 00:44:05.845 13:06:25 -- common/autotest_common.sh@931 -- # uname 00:44:05.845 13:06:25 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:44:05.845 13:06:25 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 89937 00:44:06.114 killing process with pid 89937 00:44:06.114 13:06:25 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:44:06.114 13:06:25 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:44:06.114 13:06:25 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 89937' 00:44:06.114 13:06:25 -- common/autotest_common.sh@945 -- # kill 89937 00:44:06.114 13:06:25 -- common/autotest_common.sh@950 -- # wait 89937 00:44:06.372 13:06:25 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:44:06.372 13:06:25 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:44:06.372 13:06:25 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:44:06.372 13:06:25 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:44:06.372 13:06:25 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:44:06.372 13:06:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:06.372 13:06:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:44:06.372 13:06:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:06.372 13:06:25 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:44:06.372 ************************************ 00:44:06.372 END TEST nvmf_multiconnection 00:44:06.372 ************************************ 00:44:06.372 00:44:06.372 real 0m49.530s 00:44:06.372 user 2m43.348s 00:44:06.372 sys 
0m27.746s 00:44:06.372 13:06:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:44:06.372 13:06:25 -- common/autotest_common.sh@10 -- # set +x 00:44:06.630 13:06:25 -- nvmf/nvmf.sh@66 -- # run_test nvmf_initiator_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:44:06.630 13:06:25 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:44:06.630 13:06:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:44:06.630 13:06:25 -- common/autotest_common.sh@10 -- # set +x 00:44:06.630 ************************************ 00:44:06.630 START TEST nvmf_initiator_timeout 00:44:06.630 ************************************ 00:44:06.630 13:06:25 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:44:06.630 * Looking for test storage... 00:44:06.630 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:44:06.630 13:06:25 -- target/initiator_timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:44:06.630 13:06:25 -- nvmf/common.sh@7 -- # uname -s 00:44:06.630 13:06:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:44:06.630 13:06:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:44:06.630 13:06:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:44:06.630 13:06:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:44:06.630 13:06:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:44:06.630 13:06:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:44:06.630 13:06:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:44:06.630 13:06:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:44:06.630 13:06:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:44:06.630 13:06:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:44:06.630 13:06:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:864ae4a3-cd96-439b-8ef2-6dab4d992115 00:44:06.630 13:06:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=864ae4a3-cd96-439b-8ef2-6dab4d992115 00:44:06.630 13:06:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:44:06.630 13:06:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:44:06.630 13:06:25 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:44:06.630 13:06:25 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:44:06.630 13:06:25 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:06.630 13:06:25 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:06.630 13:06:25 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:06.630 13:06:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:06.631 13:06:25 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:06.631 13:06:25 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:06.631 13:06:25 -- paths/export.sh@5 -- # export PATH 00:44:06.631 13:06:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:06.631 13:06:25 -- nvmf/common.sh@46 -- # : 0 00:44:06.631 13:06:25 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:44:06.631 13:06:25 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:44:06.631 13:06:25 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:44:06.631 13:06:25 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:44:06.631 13:06:25 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:44:06.631 13:06:25 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:44:06.631 13:06:25 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:44:06.631 13:06:25 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:44:06.631 13:06:25 -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:44:06.631 13:06:25 -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:44:06.631 13:06:25 -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:44:06.631 13:06:25 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:44:06.631 13:06:25 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:44:06.631 13:06:25 -- nvmf/common.sh@436 -- # prepare_net_devs 00:44:06.631 13:06:25 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:44:06.631 13:06:25 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:44:06.631 13:06:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:06.631 13:06:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:44:06.631 13:06:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:06.631 13:06:25 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:44:06.631 13:06:25 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:44:06.631 13:06:25 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:44:06.631 13:06:25 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:44:06.631 13:06:25 -- nvmf/common.sh@419 -- # [[ tcp == 
tcp ]] 00:44:06.631 13:06:25 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:44:06.631 13:06:25 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:44:06.631 13:06:25 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:44:06.631 13:06:25 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:44:06.631 13:06:25 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:44:06.631 13:06:25 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:44:06.631 13:06:25 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:44:06.631 13:06:25 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:44:06.631 13:06:25 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:44:06.631 13:06:25 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:44:06.631 13:06:25 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:44:06.631 13:06:25 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:44:06.631 13:06:25 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:44:06.631 13:06:25 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:44:06.631 13:06:25 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:44:06.631 Cannot find device "nvmf_tgt_br" 00:44:06.631 13:06:25 -- nvmf/common.sh@154 -- # true 00:44:06.631 13:06:25 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:44:06.631 Cannot find device "nvmf_tgt_br2" 00:44:06.631 13:06:25 -- nvmf/common.sh@155 -- # true 00:44:06.631 13:06:25 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:44:06.631 13:06:25 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:44:06.631 Cannot find device "nvmf_tgt_br" 00:44:06.631 13:06:25 -- nvmf/common.sh@157 -- # true 00:44:06.631 13:06:25 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:44:06.631 Cannot find device "nvmf_tgt_br2" 00:44:06.631 13:06:25 -- nvmf/common.sh@158 -- # true 00:44:06.631 13:06:25 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:44:06.631 13:06:26 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:44:06.889 13:06:26 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:44:06.889 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:44:06.889 13:06:26 -- nvmf/common.sh@161 -- # true 00:44:06.889 13:06:26 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:44:06.889 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:44:06.889 13:06:26 -- nvmf/common.sh@162 -- # true 00:44:06.889 13:06:26 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:44:06.889 13:06:26 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:44:06.889 13:06:26 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:44:06.889 13:06:26 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:44:06.889 13:06:26 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:44:06.889 13:06:26 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:44:06.889 13:06:26 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:44:06.889 13:06:26 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:44:06.889 13:06:26 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 
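For reference, the test topology that nvmf_veth_init is assembling here can be reproduced by hand with roughly the commands below. This is a condensed sketch of the steps visible in the log (including the bring-up, bridging and iptables steps that follow just below); the interface and namespace names are the script's defaults, and the cleanup of a previous run is omitted.

ip netns add nvmf_tgt_ns_spdk
# one veth pair for the initiator, two for the target-side listeners
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
# move the target ends of the pairs into the namespace
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
# 10.0.0.1 = initiator, 10.0.0.2 and 10.0.0.3 = target listen addresses
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
# bring everything up and bridge the host-side peers together
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
# allow NVMe/TCP traffic on port 4420 and forwarding across the bridge
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT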
00:44:06.889 13:06:26 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:44:06.889 13:06:26 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:44:06.889 13:06:26 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:44:06.889 13:06:26 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:44:06.889 13:06:26 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:44:06.889 13:06:26 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:44:06.889 13:06:26 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:44:06.889 13:06:26 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:44:06.889 13:06:26 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:44:06.889 13:06:26 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:44:06.889 13:06:26 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:44:06.889 13:06:26 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:44:06.889 13:06:26 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:44:06.889 13:06:26 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:44:06.889 13:06:26 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:44:06.889 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:44:06.889 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:44:06.889 00:44:06.889 --- 10.0.0.2 ping statistics --- 00:44:06.889 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:06.889 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:44:06.889 13:06:26 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:44:06.889 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:44:06.889 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.102 ms 00:44:06.889 00:44:06.889 --- 10.0.0.3 ping statistics --- 00:44:06.889 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:06.889 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:44:06.889 13:06:26 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:44:06.889 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:44:06.889 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:44:06.889 00:44:06.889 --- 10.0.0.1 ping statistics --- 00:44:06.889 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:06.889 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:44:06.889 13:06:26 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:44:06.889 13:06:26 -- nvmf/common.sh@421 -- # return 0 00:44:06.889 13:06:26 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:44:06.890 13:06:26 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:44:06.890 13:06:26 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:44:06.890 13:06:26 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:44:06.890 13:06:26 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:44:06.890 13:06:26 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:44:06.890 13:06:26 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:44:06.890 13:06:26 -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:44:06.890 13:06:26 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:44:06.890 13:06:26 -- common/autotest_common.sh@712 -- # xtrace_disable 00:44:06.890 13:06:26 -- common/autotest_common.sh@10 -- # set +x 00:44:06.890 13:06:26 -- nvmf/common.sh@469 -- # nvmfpid=91014 00:44:06.890 13:06:26 -- nvmf/common.sh@470 -- # waitforlisten 91014 00:44:06.890 13:06:26 -- common/autotest_common.sh@819 -- # '[' -z 91014 ']' 00:44:06.890 13:06:26 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:44:06.890 13:06:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:06.890 13:06:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:44:06.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:06.890 13:06:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:06.890 13:06:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:44:06.890 13:06:26 -- common/autotest_common.sh@10 -- # set +x 00:44:07.148 [2024-07-22 13:06:26.344434] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:44:07.148 [2024-07-22 13:06:26.344513] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:44:07.148 [2024-07-22 13:06:26.472763] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:44:07.148 [2024-07-22 13:06:26.545781] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:44:07.148 [2024-07-22 13:06:26.545936] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:44:07.148 [2024-07-22 13:06:26.545949] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:44:07.148 [2024-07-22 13:06:26.545957] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
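The nvmfappstart call above launches the SPDK target inside that namespace and blocks until its RPC socket answers. A minimal stand-alone equivalent might look like the following; the polling loop is an approximation of the waitforlisten helper in autotest_common.sh, and the binary and script paths are assumed to be the repository defaults shown in the log.

ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# poll the UNIX-domain RPC socket until the target is ready for configuration RPCs
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 -s /var/tmp/spdk.sock \
        rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done
echo "nvmf_tgt (pid $nvmfpid) is listening on /var/tmp/spdk.sock"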
00:44:07.148 [2024-07-22 13:06:26.546155] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:44:07.148 [2024-07-22 13:06:26.546274] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:44:07.148 [2024-07-22 13:06:26.546346] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:44:07.148 [2024-07-22 13:06:26.546348] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:44:08.082 13:06:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:44:08.082 13:06:27 -- common/autotest_common.sh@852 -- # return 0 00:44:08.082 13:06:27 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:44:08.082 13:06:27 -- common/autotest_common.sh@718 -- # xtrace_disable 00:44:08.082 13:06:27 -- common/autotest_common.sh@10 -- # set +x 00:44:08.082 13:06:27 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:44:08.082 13:06:27 -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:44:08.082 13:06:27 -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:44:08.082 13:06:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:44:08.082 13:06:27 -- common/autotest_common.sh@10 -- # set +x 00:44:08.082 Malloc0 00:44:08.082 13:06:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:44:08.082 13:06:27 -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:44:08.082 13:06:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:44:08.082 13:06:27 -- common/autotest_common.sh@10 -- # set +x 00:44:08.082 Delay0 00:44:08.082 13:06:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:44:08.082 13:06:27 -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:44:08.082 13:06:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:44:08.082 13:06:27 -- common/autotest_common.sh@10 -- # set +x 00:44:08.082 [2024-07-22 13:06:27.388024] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:44:08.082 13:06:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:44:08.082 13:06:27 -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:44:08.082 13:06:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:44:08.082 13:06:27 -- common/autotest_common.sh@10 -- # set +x 00:44:08.082 13:06:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:44:08.082 13:06:27 -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:44:08.082 13:06:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:44:08.082 13:06:27 -- common/autotest_common.sh@10 -- # set +x 00:44:08.082 13:06:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:44:08.082 13:06:27 -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:44:08.082 13:06:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:44:08.082 13:06:27 -- common/autotest_common.sh@10 -- # set +x 00:44:08.082 [2024-07-22 13:06:27.416217] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:44:08.082 13:06:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:44:08.082 13:06:27 -- target/initiator_timeout.sh@29 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:864ae4a3-cd96-439b-8ef2-6dab4d992115 --hostid=864ae4a3-cd96-439b-8ef2-6dab4d992115 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:44:08.341 13:06:27 -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:44:08.341 13:06:27 -- common/autotest_common.sh@1177 -- # local i=0 00:44:08.341 13:06:27 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:44:08.341 13:06:27 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:44:08.341 13:06:27 -- common/autotest_common.sh@1184 -- # sleep 2 00:44:10.242 13:06:29 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:44:10.242 13:06:29 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:44:10.242 13:06:29 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:44:10.242 13:06:29 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:44:10.242 13:06:29 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:44:10.242 13:06:29 -- common/autotest_common.sh@1187 -- # return 0 00:44:10.242 13:06:29 -- target/initiator_timeout.sh@35 -- # fio_pid=91096 00:44:10.242 13:06:29 -- target/initiator_timeout.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:44:10.242 13:06:29 -- target/initiator_timeout.sh@37 -- # sleep 3 00:44:10.242 [global] 00:44:10.242 thread=1 00:44:10.242 invalidate=1 00:44:10.242 rw=write 00:44:10.242 time_based=1 00:44:10.242 runtime=60 00:44:10.242 ioengine=libaio 00:44:10.242 direct=1 00:44:10.242 bs=4096 00:44:10.242 iodepth=1 00:44:10.242 norandommap=0 00:44:10.242 numjobs=1 00:44:10.242 00:44:10.242 verify_dump=1 00:44:10.242 verify_backlog=512 00:44:10.242 verify_state_save=0 00:44:10.242 do_verify=1 00:44:10.242 verify=crc32c-intel 00:44:10.242 [job0] 00:44:10.242 filename=/dev/nvme0n1 00:44:10.242 Could not set queue depth (nvme0n1) 00:44:10.500 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:44:10.500 fio-3.35 00:44:10.500 Starting 1 thread 00:44:13.784 13:06:32 -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:44:13.784 13:06:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:44:13.784 13:06:32 -- common/autotest_common.sh@10 -- # set +x 00:44:13.784 true 00:44:13.784 13:06:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:44:13.784 13:06:32 -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:44:13.784 13:06:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:44:13.784 13:06:32 -- common/autotest_common.sh@10 -- # set +x 00:44:13.784 true 00:44:13.784 13:06:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:44:13.784 13:06:32 -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:44:13.784 13:06:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:44:13.784 13:06:32 -- common/autotest_common.sh@10 -- # set +x 00:44:13.784 true 00:44:13.784 13:06:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:44:13.784 13:06:32 -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:44:13.784 13:06:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:44:13.784 13:06:32 -- common/autotest_common.sh@10 -- # set +x 00:44:13.784 true 00:44:13.784 13:06:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:44:13.784 13:06:32 -- 
target/initiator_timeout.sh@45 -- # sleep 3 00:44:16.315 13:06:35 -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:44:16.315 13:06:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:44:16.315 13:06:35 -- common/autotest_common.sh@10 -- # set +x 00:44:16.315 true 00:44:16.315 13:06:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:44:16.315 13:06:35 -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:44:16.315 13:06:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:44:16.315 13:06:35 -- common/autotest_common.sh@10 -- # set +x 00:44:16.315 true 00:44:16.315 13:06:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:44:16.315 13:06:35 -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:44:16.315 13:06:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:44:16.315 13:06:35 -- common/autotest_common.sh@10 -- # set +x 00:44:16.315 true 00:44:16.315 13:06:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:44:16.315 13:06:35 -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:44:16.315 13:06:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:44:16.315 13:06:35 -- common/autotest_common.sh@10 -- # set +x 00:44:16.315 true 00:44:16.315 13:06:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:44:16.315 13:06:35 -- target/initiator_timeout.sh@53 -- # fio_status=0 00:44:16.315 13:06:35 -- target/initiator_timeout.sh@54 -- # wait 91096 00:45:12.624 00:45:12.624 job0: (groupid=0, jobs=1): err= 0: pid=91117: Mon Jul 22 13:07:29 2024 00:45:12.624 read: IOPS=891, BW=3567KiB/s (3653kB/s)(209MiB/60000msec) 00:45:12.624 slat (usec): min=12, max=12883, avg=15.33, stdev=65.29 00:45:12.624 clat (usec): min=149, max=40473k, avg=936.17, stdev=174961.77 00:45:12.624 lat (usec): min=162, max=40473k, avg=951.50, stdev=174961.78 00:45:12.624 clat percentiles (usec): 00:45:12.624 | 1.00th=[ 157], 5.00th=[ 161], 10.00th=[ 163], 20.00th=[ 167], 00:45:12.624 | 30.00th=[ 169], 40.00th=[ 172], 50.00th=[ 176], 60.00th=[ 180], 00:45:12.624 | 70.00th=[ 186], 80.00th=[ 194], 90.00th=[ 202], 95.00th=[ 212], 00:45:12.624 | 99.00th=[ 229], 99.50th=[ 239], 99.90th=[ 269], 99.95th=[ 306], 00:45:12.624 | 99.99th=[ 619] 00:45:12.624 write: IOPS=896, BW=3584KiB/s (3670kB/s)(210MiB/60000msec); 0 zone resets 00:45:12.624 slat (usec): min=18, max=559, avg=21.92, stdev= 6.91 00:45:12.624 clat (usec): min=118, max=1554, avg=143.76, stdev=19.10 00:45:12.624 lat (usec): min=137, max=1574, avg=165.68, stdev=20.71 00:45:12.624 clat percentiles (usec): 00:45:12.624 | 1.00th=[ 125], 5.00th=[ 128], 10.00th=[ 130], 20.00th=[ 133], 00:45:12.624 | 30.00th=[ 135], 40.00th=[ 137], 50.00th=[ 139], 60.00th=[ 143], 00:45:12.624 | 70.00th=[ 149], 80.00th=[ 155], 90.00th=[ 165], 95.00th=[ 174], 00:45:12.624 | 99.00th=[ 192], 99.50th=[ 200], 99.90th=[ 241], 99.95th=[ 306], 00:45:12.624 | 99.99th=[ 578] 00:45:12.624 bw ( KiB/s): min= 4720, max=12288, per=100.00%, avg=10813.13, stdev=1621.88, samples=39 00:45:12.624 iops : min= 1180, max= 3072, avg=2703.28, stdev=405.47, samples=39 00:45:12.624 lat (usec) : 250=99.83%, 500=0.15%, 750=0.01% 00:45:12.624 lat (msec) : 2=0.01%, >=2000=0.01% 00:45:12.624 cpu : usr=0.63%, sys=2.40%, ctx=107291, majf=0, minf=2 00:45:12.624 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:12.624 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:45:12.624 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:12.624 issued rwts: total=53510,53760,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:12.624 latency : target=0, window=0, percentile=100.00%, depth=1 00:45:12.624 00:45:12.624 Run status group 0 (all jobs): 00:45:12.624 READ: bw=3567KiB/s (3653kB/s), 3567KiB/s-3567KiB/s (3653kB/s-3653kB/s), io=209MiB (219MB), run=60000-60000msec 00:45:12.624 WRITE: bw=3584KiB/s (3670kB/s), 3584KiB/s-3584KiB/s (3670kB/s-3670kB/s), io=210MiB (220MB), run=60000-60000msec 00:45:12.624 00:45:12.624 Disk stats (read/write): 00:45:12.624 nvme0n1: ios=53541/53479, merge=0/0, ticks=10153/8452, in_queue=18605, util=99.78% 00:45:12.624 13:07:29 -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:45:12.624 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:45:12.624 13:07:30 -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:45:12.624 13:07:30 -- common/autotest_common.sh@1198 -- # local i=0 00:45:12.624 13:07:30 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:45:12.624 13:07:30 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:45:12.624 13:07:30 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:45:12.624 13:07:30 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:45:12.624 13:07:30 -- common/autotest_common.sh@1210 -- # return 0 00:45:12.624 13:07:30 -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:45:12.624 13:07:30 -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:45:12.624 nvmf hotplug test: fio successful as expected 00:45:12.624 13:07:30 -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:45:12.624 13:07:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:45:12.624 13:07:30 -- common/autotest_common.sh@10 -- # set +x 00:45:12.624 13:07:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:45:12.624 13:07:30 -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:45:12.624 13:07:30 -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:45:12.624 13:07:30 -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:45:12.624 13:07:30 -- nvmf/common.sh@476 -- # nvmfcleanup 00:45:12.624 13:07:30 -- nvmf/common.sh@116 -- # sync 00:45:12.624 13:07:30 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:45:12.624 13:07:30 -- nvmf/common.sh@119 -- # set +e 00:45:12.624 13:07:30 -- nvmf/common.sh@120 -- # for i in {1..20} 00:45:12.624 13:07:30 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:45:12.624 rmmod nvme_tcp 00:45:12.624 rmmod nvme_fabrics 00:45:12.624 rmmod nvme_keyring 00:45:12.624 13:07:30 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:45:12.624 13:07:30 -- nvmf/common.sh@123 -- # set -e 00:45:12.624 13:07:30 -- nvmf/common.sh@124 -- # return 0 00:45:12.624 13:07:30 -- nvmf/common.sh@477 -- # '[' -n 91014 ']' 00:45:12.624 13:07:30 -- nvmf/common.sh@478 -- # killprocess 91014 00:45:12.624 13:07:30 -- common/autotest_common.sh@926 -- # '[' -z 91014 ']' 00:45:12.624 13:07:30 -- common/autotest_common.sh@930 -- # kill -0 91014 00:45:12.624 13:07:30 -- common/autotest_common.sh@931 -- # uname 00:45:12.624 13:07:30 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:45:12.624 13:07:30 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 91014 00:45:12.624 killing process with pid 91014 00:45:12.624 
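To recap the rpc_cmd sequence driving this test: a delay bdev (Delay0) with 30 us latencies is layered over Malloc0 and exported to the initiator; a few seconds into the 60-second fio write job, every latency is raised to 31 s (p99_write to 310 s per the log), which exceeds the kernel initiator's default 30-second I/O timeout, and shortly afterwards the latencies are restored to 30 us so the job can finish, ending in "nvmf hotplug test: fio successful as expected". Expressed as direct rpc.py calls against the target socket, the same sequence is roughly the sketch below (rpc_cmd in the log is a thin wrapper around these calls).

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
# back the subsystem namespace with a delay bdev over Malloc0 (latencies in microseconds)
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30
# ... fio is started against the exported namespace, then the latencies are raised ...
$RPC bdev_delay_update_latency Delay0 avg_read 31000000
$RPC bdev_delay_update_latency Delay0 avg_write 31000000
$RPC bdev_delay_update_latency Delay0 p99_read 31000000
$RPC bdev_delay_update_latency Delay0 p99_write 310000000
# ... and later restored so the outstanding I/O can complete
$RPC bdev_delay_update_latency Delay0 avg_read 30
$RPC bdev_delay_update_latency Delay0 avg_write 30
$RPC bdev_delay_update_latency Delay0 p99_read 30
$RPC bdev_delay_update_latency Delay0 p99_write 30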
13:07:30 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:45:12.624 13:07:30 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:45:12.624 13:07:30 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 91014' 00:45:12.624 13:07:30 -- common/autotest_common.sh@945 -- # kill 91014 00:45:12.624 13:07:30 -- common/autotest_common.sh@950 -- # wait 91014 00:45:12.624 13:07:30 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:45:12.624 13:07:30 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:45:12.624 13:07:30 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:45:12.624 13:07:30 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:45:12.624 13:07:30 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:45:12.624 13:07:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:45:12.624 13:07:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:45:12.624 13:07:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:12.624 13:07:30 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:45:12.624 ************************************ 00:45:12.625 END TEST nvmf_initiator_timeout 00:45:12.625 ************************************ 00:45:12.625 00:45:12.625 real 1m4.603s 00:45:12.625 user 4m5.166s 00:45:12.625 sys 0m10.443s 00:45:12.625 13:07:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:45:12.625 13:07:30 -- common/autotest_common.sh@10 -- # set +x 00:45:12.625 13:07:30 -- nvmf/nvmf.sh@69 -- # [[ virt == phy ]] 00:45:12.625 13:07:30 -- nvmf/nvmf.sh@86 -- # timing_exit target 00:45:12.625 13:07:30 -- common/autotest_common.sh@718 -- # xtrace_disable 00:45:12.625 13:07:30 -- common/autotest_common.sh@10 -- # set +x 00:45:12.625 13:07:30 -- nvmf/nvmf.sh@88 -- # timing_enter host 00:45:12.625 13:07:30 -- common/autotest_common.sh@712 -- # xtrace_disable 00:45:12.625 13:07:30 -- common/autotest_common.sh@10 -- # set +x 00:45:12.625 13:07:30 -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:45:12.625 13:07:30 -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:45:12.625 13:07:30 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:45:12.625 13:07:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:45:12.625 13:07:30 -- common/autotest_common.sh@10 -- # set +x 00:45:12.625 ************************************ 00:45:12.625 START TEST nvmf_multicontroller 00:45:12.625 ************************************ 00:45:12.625 13:07:30 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:45:12.625 * Looking for test storage... 
00:45:12.625 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:45:12.625 13:07:30 -- host/multicontroller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:45:12.625 13:07:30 -- nvmf/common.sh@7 -- # uname -s 00:45:12.625 13:07:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:45:12.625 13:07:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:45:12.625 13:07:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:45:12.625 13:07:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:45:12.625 13:07:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:45:12.625 13:07:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:45:12.625 13:07:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:45:12.625 13:07:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:45:12.625 13:07:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:45:12.625 13:07:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:45:12.625 13:07:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:864ae4a3-cd96-439b-8ef2-6dab4d992115 00:45:12.625 13:07:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=864ae4a3-cd96-439b-8ef2-6dab4d992115 00:45:12.625 13:07:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:45:12.625 13:07:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:45:12.625 13:07:30 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:45:12.625 13:07:30 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:45:12.625 13:07:30 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:45:12.625 13:07:30 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:45:12.625 13:07:30 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:45:12.625 13:07:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:12.625 13:07:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:12.625 13:07:30 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:12.625 13:07:30 -- 
paths/export.sh@5 -- # export PATH 00:45:12.625 13:07:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:12.625 13:07:30 -- nvmf/common.sh@46 -- # : 0 00:45:12.625 13:07:30 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:45:12.625 13:07:30 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:45:12.625 13:07:30 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:45:12.625 13:07:30 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:45:12.625 13:07:30 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:45:12.625 13:07:30 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:45:12.625 13:07:30 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:45:12.625 13:07:30 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:45:12.625 13:07:30 -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:45:12.625 13:07:30 -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:45:12.625 13:07:30 -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:45:12.625 13:07:30 -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:45:12.625 13:07:30 -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:45:12.625 13:07:30 -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:45:12.625 13:07:30 -- host/multicontroller.sh@23 -- # nvmftestinit 00:45:12.625 13:07:30 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:45:12.625 13:07:30 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:45:12.625 13:07:30 -- nvmf/common.sh@436 -- # prepare_net_devs 00:45:12.625 13:07:30 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:45:12.625 13:07:30 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:45:12.625 13:07:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:45:12.625 13:07:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:45:12.625 13:07:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:12.625 13:07:30 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:45:12.625 13:07:30 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:45:12.625 13:07:30 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:45:12.625 13:07:30 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:45:12.625 13:07:30 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:45:12.625 13:07:30 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:45:12.625 13:07:30 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:45:12.625 13:07:30 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:45:12.625 13:07:30 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:45:12.625 13:07:30 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:45:12.625 13:07:30 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:45:12.625 13:07:30 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:45:12.625 13:07:30 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:45:12.625 13:07:30 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:45:12.625 13:07:30 -- nvmf/common.sh@148 -- # 
NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:45:12.625 13:07:30 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:45:12.625 13:07:30 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:45:12.625 13:07:30 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:45:12.625 13:07:30 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:45:12.625 13:07:30 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:45:12.625 Cannot find device "nvmf_tgt_br" 00:45:12.625 13:07:30 -- nvmf/common.sh@154 -- # true 00:45:12.625 13:07:30 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:45:12.625 Cannot find device "nvmf_tgt_br2" 00:45:12.625 13:07:30 -- nvmf/common.sh@155 -- # true 00:45:12.625 13:07:30 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:45:12.625 13:07:30 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:45:12.625 Cannot find device "nvmf_tgt_br" 00:45:12.625 13:07:30 -- nvmf/common.sh@157 -- # true 00:45:12.625 13:07:30 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:45:12.625 Cannot find device "nvmf_tgt_br2" 00:45:12.625 13:07:30 -- nvmf/common.sh@158 -- # true 00:45:12.625 13:07:30 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:45:12.625 13:07:30 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:45:12.625 13:07:30 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:45:12.625 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:45:12.625 13:07:30 -- nvmf/common.sh@161 -- # true 00:45:12.625 13:07:30 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:45:12.625 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:45:12.625 13:07:30 -- nvmf/common.sh@162 -- # true 00:45:12.625 13:07:30 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:45:12.625 13:07:30 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:45:12.625 13:07:30 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:45:12.625 13:07:30 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:45:12.625 13:07:30 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:45:12.625 13:07:30 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:45:12.625 13:07:30 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:45:12.625 13:07:30 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:45:12.625 13:07:30 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:45:12.625 13:07:30 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:45:12.625 13:07:30 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:45:12.625 13:07:30 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:45:12.625 13:07:30 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:45:12.625 13:07:30 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:45:12.625 13:07:30 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:45:12.625 13:07:30 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:45:12.626 13:07:30 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:45:12.626 13:07:30 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:45:12.626 13:07:30 -- 
nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:45:12.626 13:07:30 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:45:12.626 13:07:30 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:45:12.626 13:07:30 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:45:12.626 13:07:30 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:45:12.626 13:07:30 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:45:12.626 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:45:12.626 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:45:12.626 00:45:12.626 --- 10.0.0.2 ping statistics --- 00:45:12.626 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:12.626 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:45:12.626 13:07:30 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:45:12.626 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:45:12.626 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:45:12.626 00:45:12.626 --- 10.0.0.3 ping statistics --- 00:45:12.626 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:12.626 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:45:12.626 13:07:30 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:45:12.626 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:45:12.626 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:45:12.626 00:45:12.626 --- 10.0.0.1 ping statistics --- 00:45:12.626 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:12.626 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:45:12.626 13:07:30 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:45:12.626 13:07:30 -- nvmf/common.sh@421 -- # return 0 00:45:12.626 13:07:30 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:45:12.626 13:07:30 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:45:12.626 13:07:30 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:45:12.626 13:07:30 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:45:12.626 13:07:30 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:45:12.626 13:07:30 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:45:12.626 13:07:30 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:45:12.626 13:07:30 -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:45:12.626 13:07:30 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:45:12.626 13:07:30 -- common/autotest_common.sh@712 -- # xtrace_disable 00:45:12.626 13:07:30 -- common/autotest_common.sh@10 -- # set +x 00:45:12.626 13:07:30 -- nvmf/common.sh@469 -- # nvmfpid=91954 00:45:12.626 13:07:30 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:45:12.626 13:07:30 -- nvmf/common.sh@470 -- # waitforlisten 91954 00:45:12.626 13:07:30 -- common/autotest_common.sh@819 -- # '[' -z 91954 ']' 00:45:12.626 13:07:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:45:12.626 13:07:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:45:12.626 13:07:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:45:12.626 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:45:12.626 13:07:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:45:12.626 13:07:30 -- common/autotest_common.sh@10 -- # set +x 00:45:12.626 [2024-07-22 13:07:30.997849] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:45:12.626 [2024-07-22 13:07:30.997941] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:45:12.626 [2024-07-22 13:07:31.131801] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:45:12.626 [2024-07-22 13:07:31.189081] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:45:12.626 [2024-07-22 13:07:31.189252] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:45:12.626 [2024-07-22 13:07:31.189265] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:45:12.626 [2024-07-22 13:07:31.189273] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:45:12.626 [2024-07-22 13:07:31.189706] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:45:12.626 [2024-07-22 13:07:31.189892] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:45:12.626 [2024-07-22 13:07:31.189898] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:45:12.626 13:07:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:45:12.626 13:07:31 -- common/autotest_common.sh@852 -- # return 0 00:45:12.626 13:07:31 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:45:12.626 13:07:31 -- common/autotest_common.sh@718 -- # xtrace_disable 00:45:12.626 13:07:31 -- common/autotest_common.sh@10 -- # set +x 00:45:12.626 13:07:32 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:45:12.626 13:07:32 -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:45:12.626 13:07:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:45:12.626 13:07:32 -- common/autotest_common.sh@10 -- # set +x 00:45:12.626 [2024-07-22 13:07:32.020257] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:45:12.626 13:07:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:45:12.626 13:07:32 -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:45:12.626 13:07:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:45:12.626 13:07:32 -- common/autotest_common.sh@10 -- # set +x 00:45:12.885 Malloc0 00:45:12.885 13:07:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:45:12.885 13:07:32 -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:45:12.885 13:07:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:45:12.885 13:07:32 -- common/autotest_common.sh@10 -- # set +x 00:45:12.885 13:07:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:45:12.885 13:07:32 -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:45:12.885 13:07:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:45:12.885 13:07:32 -- common/autotest_common.sh@10 -- # set +x 00:45:12.885 13:07:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:45:12.885 13:07:32 -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:45:12.885 13:07:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:45:12.885 13:07:32 -- common/autotest_common.sh@10 -- # set +x 00:45:12.885 [2024-07-22 13:07:32.090273] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:45:12.885 13:07:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:45:12.885 13:07:32 -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:45:12.885 13:07:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:45:12.885 13:07:32 -- common/autotest_common.sh@10 -- # set +x 00:45:12.885 [2024-07-22 13:07:32.098198] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:45:12.885 13:07:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:45:12.885 13:07:32 -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:45:12.885 13:07:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:45:12.885 13:07:32 -- common/autotest_common.sh@10 -- # set +x 00:45:12.885 Malloc1 00:45:12.885 13:07:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:45:12.885 13:07:32 -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:45:12.885 13:07:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:45:12.885 13:07:32 -- common/autotest_common.sh@10 -- # set +x 00:45:12.885 13:07:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:45:12.885 13:07:32 -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:45:12.885 13:07:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:45:12.885 13:07:32 -- common/autotest_common.sh@10 -- # set +x 00:45:12.885 13:07:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:45:12.885 13:07:32 -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:45:12.885 13:07:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:45:12.885 13:07:32 -- common/autotest_common.sh@10 -- # set +x 00:45:12.885 13:07:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:45:12.885 13:07:32 -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:45:12.885 13:07:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:45:12.885 13:07:32 -- common/autotest_common.sh@10 -- # set +x 00:45:12.885 13:07:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:45:12.885 13:07:32 -- host/multicontroller.sh@44 -- # bdevperf_pid=92006 00:45:12.885 13:07:32 -- host/multicontroller.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:45:12.885 13:07:32 -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:45:12.885 13:07:32 -- host/multicontroller.sh@47 -- # waitforlisten 92006 /var/tmp/bdevperf.sock 00:45:12.885 13:07:32 -- common/autotest_common.sh@819 -- # '[' -z 92006 ']' 00:45:12.885 13:07:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:45:12.885 13:07:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:45:12.885 13:07:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen 
on UNIX domain socket /var/tmp/bdevperf.sock...' 00:45:12.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:45:12.885 13:07:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:45:12.885 13:07:32 -- common/autotest_common.sh@10 -- # set +x 00:45:14.260 13:07:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:45:14.260 13:07:33 -- common/autotest_common.sh@852 -- # return 0 00:45:14.260 13:07:33 -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:45:14.260 13:07:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:45:14.260 13:07:33 -- common/autotest_common.sh@10 -- # set +x 00:45:14.260 NVMe0n1 00:45:14.260 13:07:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:45:14.260 13:07:33 -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:45:14.260 13:07:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:45:14.260 13:07:33 -- host/multicontroller.sh@54 -- # grep -c NVMe 00:45:14.260 13:07:33 -- common/autotest_common.sh@10 -- # set +x 00:45:14.260 13:07:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:45:14.260 1 00:45:14.260 13:07:33 -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:45:14.260 13:07:33 -- common/autotest_common.sh@640 -- # local es=0 00:45:14.260 13:07:33 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:45:14.260 13:07:33 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:45:14.260 13:07:33 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:45:14.260 13:07:33 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:45:14.260 13:07:33 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:45:14.260 13:07:33 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:45:14.260 13:07:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:45:14.260 13:07:33 -- common/autotest_common.sh@10 -- # set +x 00:45:14.260 2024/07/22 13:07:33 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostnqn:nqn.2021-09-7.io.spdk:00001 hostsvcid:60000 name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:45:14.260 request: 00:45:14.260 { 00:45:14.260 "method": "bdev_nvme_attach_controller", 00:45:14.260 "params": { 00:45:14.260 "name": "NVMe0", 00:45:14.260 "trtype": "tcp", 00:45:14.260 "traddr": "10.0.0.2", 00:45:14.260 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:45:14.260 "hostaddr": "10.0.0.2", 00:45:14.260 "hostsvcid": "60000", 00:45:14.260 "adrfam": "ipv4", 00:45:14.260 "trsvcid": "4420", 00:45:14.260 "subnqn": "nqn.2016-06.io.spdk:cnode1" 00:45:14.260 } 00:45:14.260 } 00:45:14.260 Got JSON-RPC error response 
00:45:14.260 GoRPCClient: error on JSON-RPC call 00:45:14.260 13:07:33 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:45:14.260 13:07:33 -- common/autotest_common.sh@643 -- # es=1 00:45:14.260 13:07:33 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:45:14.260 13:07:33 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:45:14.260 13:07:33 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:45:14.260 13:07:33 -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:45:14.260 13:07:33 -- common/autotest_common.sh@640 -- # local es=0 00:45:14.260 13:07:33 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:45:14.260 13:07:33 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:45:14.260 13:07:33 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:45:14.260 13:07:33 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:45:14.260 13:07:33 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:45:14.260 13:07:33 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:45:14.260 13:07:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:45:14.260 13:07:33 -- common/autotest_common.sh@10 -- # set +x 00:45:14.260 2024/07/22 13:07:33 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:45:14.260 request: 00:45:14.260 { 00:45:14.260 "method": "bdev_nvme_attach_controller", 00:45:14.260 "params": { 00:45:14.260 "name": "NVMe0", 00:45:14.260 "trtype": "tcp", 00:45:14.260 "traddr": "10.0.0.2", 00:45:14.260 "hostaddr": "10.0.0.2", 00:45:14.260 "hostsvcid": "60000", 00:45:14.260 "adrfam": "ipv4", 00:45:14.260 "trsvcid": "4420", 00:45:14.260 "subnqn": "nqn.2016-06.io.spdk:cnode2" 00:45:14.260 } 00:45:14.260 } 00:45:14.261 Got JSON-RPC error response 00:45:14.261 GoRPCClient: error on JSON-RPC call 00:45:14.261 13:07:33 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:45:14.261 13:07:33 -- common/autotest_common.sh@643 -- # es=1 00:45:14.261 13:07:33 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:45:14.261 13:07:33 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:45:14.261 13:07:33 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:45:14.261 13:07:33 -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:45:14.261 13:07:33 -- common/autotest_common.sh@640 -- # local es=0 00:45:14.261 13:07:33 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:45:14.261 13:07:33 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:45:14.261 13:07:33 -- 
common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:45:14.261 13:07:33 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:45:14.261 13:07:33 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:45:14.261 13:07:33 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:45:14.261 13:07:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:45:14.261 13:07:33 -- common/autotest_common.sh@10 -- # set +x 00:45:14.261 2024/07/22 13:07:33 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 multipath:disable name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists and multipath is disabled 00:45:14.261 request: 00:45:14.261 { 00:45:14.261 "method": "bdev_nvme_attach_controller", 00:45:14.261 "params": { 00:45:14.261 "name": "NVMe0", 00:45:14.261 "trtype": "tcp", 00:45:14.261 "traddr": "10.0.0.2", 00:45:14.261 "hostaddr": "10.0.0.2", 00:45:14.261 "hostsvcid": "60000", 00:45:14.261 "adrfam": "ipv4", 00:45:14.261 "trsvcid": "4420", 00:45:14.261 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:45:14.261 "multipath": "disable" 00:45:14.261 } 00:45:14.261 } 00:45:14.261 Got JSON-RPC error response 00:45:14.261 GoRPCClient: error on JSON-RPC call 00:45:14.261 13:07:33 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:45:14.261 13:07:33 -- common/autotest_common.sh@643 -- # es=1 00:45:14.261 13:07:33 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:45:14.261 13:07:33 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:45:14.261 13:07:33 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:45:14.261 13:07:33 -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:45:14.261 13:07:33 -- common/autotest_common.sh@640 -- # local es=0 00:45:14.261 13:07:33 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:45:14.261 13:07:33 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:45:14.261 13:07:33 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:45:14.261 13:07:33 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:45:14.261 13:07:33 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:45:14.261 13:07:33 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:45:14.261 13:07:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:45:14.261 13:07:33 -- common/autotest_common.sh@10 -- # set +x 00:45:14.261 2024/07/22 13:07:33 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 multipath:failover name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified 
network path 00:45:14.261 request: 00:45:14.261 { 00:45:14.261 "method": "bdev_nvme_attach_controller", 00:45:14.261 "params": { 00:45:14.261 "name": "NVMe0", 00:45:14.261 "trtype": "tcp", 00:45:14.261 "traddr": "10.0.0.2", 00:45:14.261 "hostaddr": "10.0.0.2", 00:45:14.261 "hostsvcid": "60000", 00:45:14.261 "adrfam": "ipv4", 00:45:14.261 "trsvcid": "4420", 00:45:14.261 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:45:14.261 "multipath": "failover" 00:45:14.261 } 00:45:14.261 } 00:45:14.261 Got JSON-RPC error response 00:45:14.261 GoRPCClient: error on JSON-RPC call 00:45:14.261 13:07:33 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:45:14.261 13:07:33 -- common/autotest_common.sh@643 -- # es=1 00:45:14.261 13:07:33 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:45:14.261 13:07:33 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:45:14.261 13:07:33 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:45:14.261 13:07:33 -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:45:14.261 13:07:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:45:14.261 13:07:33 -- common/autotest_common.sh@10 -- # set +x 00:45:14.261 00:45:14.261 13:07:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:45:14.261 13:07:33 -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:45:14.261 13:07:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:45:14.261 13:07:33 -- common/autotest_common.sh@10 -- # set +x 00:45:14.261 13:07:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:45:14.261 13:07:33 -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:45:14.261 13:07:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:45:14.261 13:07:33 -- common/autotest_common.sh@10 -- # set +x 00:45:14.261 00:45:14.261 13:07:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:45:14.261 13:07:33 -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:45:14.261 13:07:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:45:14.261 13:07:33 -- host/multicontroller.sh@90 -- # grep -c NVMe 00:45:14.261 13:07:33 -- common/autotest_common.sh@10 -- # set +x 00:45:14.261 13:07:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:45:14.261 13:07:33 -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:45:14.261 13:07:33 -- host/multicontroller.sh@95 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:45:15.639 0 00:45:15.639 13:07:34 -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:45:15.639 13:07:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:45:15.639 13:07:34 -- common/autotest_common.sh@10 -- # set +x 00:45:15.639 13:07:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:45:15.639 13:07:34 -- host/multicontroller.sh@100 -- # killprocess 92006 00:45:15.639 13:07:34 -- common/autotest_common.sh@926 -- # '[' -z 92006 ']' 00:45:15.639 13:07:34 -- common/autotest_common.sh@930 -- # kill -0 92006 00:45:15.639 13:07:34 -- common/autotest_common.sh@931 -- # uname 00:45:15.639 13:07:34 -- common/autotest_common.sh@931 -- # '[' Linux = Linux 
']' 00:45:15.639 13:07:34 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 92006 00:45:15.639 killing process with pid 92006 00:45:15.639 13:07:34 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:45:15.639 13:07:34 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:45:15.639 13:07:34 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 92006' 00:45:15.639 13:07:34 -- common/autotest_common.sh@945 -- # kill 92006 00:45:15.639 13:07:34 -- common/autotest_common.sh@950 -- # wait 92006 00:45:15.639 13:07:34 -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:45:15.639 13:07:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:45:15.639 13:07:34 -- common/autotest_common.sh@10 -- # set +x 00:45:15.639 13:07:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:45:15.639 13:07:34 -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:45:15.639 13:07:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:45:15.639 13:07:34 -- common/autotest_common.sh@10 -- # set +x 00:45:15.639 13:07:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:45:15.639 13:07:34 -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:45:15.639 13:07:34 -- host/multicontroller.sh@107 -- # pap /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:45:15.639 13:07:34 -- common/autotest_common.sh@1597 -- # read -r file 00:45:15.639 13:07:34 -- common/autotest_common.sh@1596 -- # find /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt -type f 00:45:15.639 13:07:34 -- common/autotest_common.sh@1596 -- # sort -u 00:45:15.640 13:07:34 -- common/autotest_common.sh@1598 -- # cat 00:45:15.640 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:45:15.640 [2024-07-22 13:07:32.212487] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:45:15.640 [2024-07-22 13:07:32.212690] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92006 ] 00:45:15.640 [2024-07-22 13:07:32.354127] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:15.640 [2024-07-22 13:07:32.430820] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:45:15.640 [2024-07-22 13:07:33.545527] bdev.c:4553:bdev_name_add: *ERROR*: Bdev name 8b5db1eb-dca4-4418-98a7-d99283a15d6a already exists 00:45:15.640 [2024-07-22 13:07:33.545596] bdev.c:7603:bdev_register: *ERROR*: Unable to add uuid:8b5db1eb-dca4-4418-98a7-d99283a15d6a alias for bdev NVMe1n1 00:45:15.640 [2024-07-22 13:07:33.545647] bdev_nvme.c:4236:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:45:15.640 Running I/O for 1 seconds... 
00:45:15.640 00:45:15.640 Latency(us) 00:45:15.640 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:45:15.640 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:45:15.640 NVMe0n1 : 1.00 21785.44 85.10 0.00 0.00 5863.27 2546.97 10068.71 00:45:15.640 =================================================================================================================== 00:45:15.640 Total : 21785.44 85.10 0.00 0.00 5863.27 2546.97 10068.71 00:45:15.640 Received shutdown signal, test time was about 1.000000 seconds 00:45:15.640 00:45:15.640 Latency(us) 00:45:15.640 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:45:15.640 =================================================================================================================== 00:45:15.640 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:45:15.640 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:45:15.640 13:07:34 -- common/autotest_common.sh@1603 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:45:15.640 13:07:34 -- common/autotest_common.sh@1597 -- # read -r file 00:45:15.640 13:07:34 -- host/multicontroller.sh@108 -- # nvmftestfini 00:45:15.640 13:07:34 -- nvmf/common.sh@476 -- # nvmfcleanup 00:45:15.640 13:07:34 -- nvmf/common.sh@116 -- # sync 00:45:15.640 13:07:35 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:45:15.640 13:07:35 -- nvmf/common.sh@119 -- # set +e 00:45:15.640 13:07:35 -- nvmf/common.sh@120 -- # for i in {1..20} 00:45:15.640 13:07:35 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:45:15.640 rmmod nvme_tcp 00:45:15.899 rmmod nvme_fabrics 00:45:15.899 rmmod nvme_keyring 00:45:15.899 13:07:35 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:45:15.899 13:07:35 -- nvmf/common.sh@123 -- # set -e 00:45:15.899 13:07:35 -- nvmf/common.sh@124 -- # return 0 00:45:15.899 13:07:35 -- nvmf/common.sh@477 -- # '[' -n 91954 ']' 00:45:15.899 13:07:35 -- nvmf/common.sh@478 -- # killprocess 91954 00:45:15.899 13:07:35 -- common/autotest_common.sh@926 -- # '[' -z 91954 ']' 00:45:15.899 13:07:35 -- common/autotest_common.sh@930 -- # kill -0 91954 00:45:15.899 13:07:35 -- common/autotest_common.sh@931 -- # uname 00:45:15.899 13:07:35 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:45:15.899 13:07:35 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 91954 00:45:15.899 killing process with pid 91954 00:45:15.899 13:07:35 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:45:15.899 13:07:35 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:45:15.899 13:07:35 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 91954' 00:45:15.899 13:07:35 -- common/autotest_common.sh@945 -- # kill 91954 00:45:15.899 13:07:35 -- common/autotest_common.sh@950 -- # wait 91954 00:45:16.159 13:07:35 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:45:16.159 13:07:35 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:45:16.159 13:07:35 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:45:16.159 13:07:35 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:45:16.159 13:07:35 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:45:16.159 13:07:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:45:16.159 13:07:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:45:16.159 13:07:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:16.159 13:07:35 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:45:16.159 
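Condensed, the multicontroller checks above reduce to a handful of RPC calls against the bdevperf application socket. The sketch below restates them as scripts/rpc.py invocations (assumed here as an equivalent front end for the rpc_cmd helper used in the trace); the commented outcomes are the ones recorded in the log.

    RPC="scripts/rpc.py -s /var/tmp/bdevperf.sock"
    # first path: accepted, creates bdev NVMe0n1
    $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
    # reusing the name NVMe0 with a different hostnqn, a different subsystem,
    # multipath disabled, or multipath failover is rejected with Code=-114
    # "A controller named NVMe0 already exists ..."
    $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001
    $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000
    $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable
    $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover
    # a second listener path (port 4421) and a second controller name (NVMe1) are accepted
    $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    $RPC bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    $RPC bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
    $RPC bdev_nvme_get_controllers   # reports both NVMe0 and NVMe1 before the I/O run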
************************************ 00:45:16.159 END TEST nvmf_multicontroller 00:45:16.159 ************************************ 00:45:16.159 00:45:16.159 real 0m4.899s 00:45:16.159 user 0m15.777s 00:45:16.159 sys 0m1.036s 00:45:16.159 13:07:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:45:16.159 13:07:35 -- common/autotest_common.sh@10 -- # set +x 00:45:16.159 13:07:35 -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:45:16.159 13:07:35 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:45:16.159 13:07:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:45:16.159 13:07:35 -- common/autotest_common.sh@10 -- # set +x 00:45:16.159 ************************************ 00:45:16.159 START TEST nvmf_aer 00:45:16.159 ************************************ 00:45:16.159 13:07:35 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:45:16.159 * Looking for test storage... 00:45:16.159 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:45:16.159 13:07:35 -- host/aer.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:45:16.159 13:07:35 -- nvmf/common.sh@7 -- # uname -s 00:45:16.159 13:07:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:45:16.159 13:07:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:45:16.159 13:07:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:45:16.159 13:07:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:45:16.159 13:07:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:45:16.159 13:07:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:45:16.159 13:07:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:45:16.159 13:07:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:45:16.159 13:07:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:45:16.159 13:07:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:45:16.159 13:07:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:864ae4a3-cd96-439b-8ef2-6dab4d992115 00:45:16.159 13:07:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=864ae4a3-cd96-439b-8ef2-6dab4d992115 00:45:16.159 13:07:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:45:16.159 13:07:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:45:16.159 13:07:35 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:45:16.159 13:07:35 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:45:16.159 13:07:35 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:45:16.159 13:07:35 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:45:16.159 13:07:35 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:45:16.159 13:07:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:16.159 13:07:35 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:16.159 13:07:35 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:16.159 13:07:35 -- paths/export.sh@5 -- # export PATH 00:45:16.159 13:07:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:16.159 13:07:35 -- nvmf/common.sh@46 -- # : 0 00:45:16.159 13:07:35 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:45:16.159 13:07:35 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:45:16.159 13:07:35 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:45:16.159 13:07:35 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:45:16.159 13:07:35 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:45:16.159 13:07:35 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:45:16.159 13:07:35 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:45:16.159 13:07:35 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:45:16.159 13:07:35 -- host/aer.sh@11 -- # nvmftestinit 00:45:16.159 13:07:35 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:45:16.159 13:07:35 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:45:16.159 13:07:35 -- nvmf/common.sh@436 -- # prepare_net_devs 00:45:16.159 13:07:35 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:45:16.159 13:07:35 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:45:16.159 13:07:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:45:16.159 13:07:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:45:16.159 13:07:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:16.159 13:07:35 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:45:16.159 13:07:35 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:45:16.159 13:07:35 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:45:16.159 13:07:35 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:45:16.159 13:07:35 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:45:16.159 13:07:35 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:45:16.159 13:07:35 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:45:16.159 13:07:35 -- 
nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:45:16.159 13:07:35 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:45:16.159 13:07:35 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:45:16.160 13:07:35 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:45:16.160 13:07:35 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:45:16.160 13:07:35 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:45:16.160 13:07:35 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:45:16.160 13:07:35 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:45:16.160 13:07:35 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:45:16.160 13:07:35 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:45:16.160 13:07:35 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:45:16.160 13:07:35 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:45:16.418 13:07:35 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:45:16.418 Cannot find device "nvmf_tgt_br" 00:45:16.418 13:07:35 -- nvmf/common.sh@154 -- # true 00:45:16.418 13:07:35 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:45:16.418 Cannot find device "nvmf_tgt_br2" 00:45:16.418 13:07:35 -- nvmf/common.sh@155 -- # true 00:45:16.418 13:07:35 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:45:16.418 13:07:35 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:45:16.418 Cannot find device "nvmf_tgt_br" 00:45:16.418 13:07:35 -- nvmf/common.sh@157 -- # true 00:45:16.418 13:07:35 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:45:16.418 Cannot find device "nvmf_tgt_br2" 00:45:16.418 13:07:35 -- nvmf/common.sh@158 -- # true 00:45:16.418 13:07:35 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:45:16.418 13:07:35 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:45:16.418 13:07:35 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:45:16.418 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:45:16.418 13:07:35 -- nvmf/common.sh@161 -- # true 00:45:16.418 13:07:35 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:45:16.418 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:45:16.418 13:07:35 -- nvmf/common.sh@162 -- # true 00:45:16.418 13:07:35 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:45:16.418 13:07:35 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:45:16.418 13:07:35 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:45:16.418 13:07:35 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:45:16.418 13:07:35 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:45:16.418 13:07:35 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:45:16.418 13:07:35 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:45:16.418 13:07:35 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:45:16.418 13:07:35 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:45:16.418 13:07:35 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:45:16.418 13:07:35 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:45:16.418 13:07:35 -- 
nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:45:16.418 13:07:35 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:45:16.418 13:07:35 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:45:16.418 13:07:35 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:45:16.418 13:07:35 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:45:16.418 13:07:35 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:45:16.418 13:07:35 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:45:16.418 13:07:35 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:45:16.418 13:07:35 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:45:16.418 13:07:35 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:45:16.677 13:07:35 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:45:16.677 13:07:35 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:45:16.677 13:07:35 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:45:16.677 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:45:16.677 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.051 ms 00:45:16.677 00:45:16.677 --- 10.0.0.2 ping statistics --- 00:45:16.677 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:16.677 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:45:16.677 13:07:35 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:45:16.677 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:45:16.677 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.033 ms 00:45:16.677 00:45:16.677 --- 10.0.0.3 ping statistics --- 00:45:16.677 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:16.677 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:45:16.677 13:07:35 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:45:16.677 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:45:16.677 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:45:16.677 00:45:16.677 --- 10.0.0.1 ping statistics --- 00:45:16.677 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:16.677 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:45:16.677 13:07:35 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:45:16.677 13:07:35 -- nvmf/common.sh@421 -- # return 0 00:45:16.677 13:07:35 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:45:16.677 13:07:35 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:45:16.677 13:07:35 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:45:16.677 13:07:35 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:45:16.677 13:07:35 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:45:16.677 13:07:35 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:45:16.677 13:07:35 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:45:16.677 13:07:35 -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:45:16.677 13:07:35 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:45:16.677 13:07:35 -- common/autotest_common.sh@712 -- # xtrace_disable 00:45:16.677 13:07:35 -- common/autotest_common.sh@10 -- # set +x 00:45:16.677 13:07:35 -- nvmf/common.sh@469 -- # nvmfpid=92255 00:45:16.677 13:07:35 -- nvmf/common.sh@470 -- # waitforlisten 92255 00:45:16.677 13:07:35 -- common/autotest_common.sh@819 -- # '[' -z 92255 ']' 00:45:16.677 13:07:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:45:16.677 13:07:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:45:16.677 13:07:35 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:45:16.677 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:45:16.677 13:07:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:45:16.677 13:07:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:45:16.677 13:07:35 -- common/autotest_common.sh@10 -- # set +x 00:45:16.677 [2024-07-22 13:07:35.944269] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:45:16.677 [2024-07-22 13:07:35.944358] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:45:16.677 [2024-07-22 13:07:36.085481] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:45:16.936 [2024-07-22 13:07:36.150947] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:45:16.936 [2024-07-22 13:07:36.151105] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:45:16.936 [2024-07-22 13:07:36.151132] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:45:16.936 [2024-07-22 13:07:36.151140] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
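The aer test's target bring-up that the trace walks through here condenses to starting nvmf_tgt inside the test namespace and issuing a few configuration RPCs. The sketch below restates those steps with scripts/rpc.py against the default /var/tmp/spdk.sock; it is a hedged condensation of the rpc_cmd calls visible in the trace, with the waitforlisten step replaced by a plain comment.

    # launch the target in the namespace, as in the trace (run from the spdk repo root)
    ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    # ... wait until /var/tmp/spdk.sock is listening, then configure the target
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 --name Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_get_subsystems   # shows cnode1 with Malloc0 as nsid 1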
00:45:16.936 [2024-07-22 13:07:36.151211] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:45:16.936 [2024-07-22 13:07:36.151724] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:45:16.936 [2024-07-22 13:07:36.151858] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:45:16.936 [2024-07-22 13:07:36.151864] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:45:17.506 13:07:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:45:17.506 13:07:36 -- common/autotest_common.sh@852 -- # return 0 00:45:17.506 13:07:36 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:45:17.506 13:07:36 -- common/autotest_common.sh@718 -- # xtrace_disable 00:45:17.506 13:07:36 -- common/autotest_common.sh@10 -- # set +x 00:45:17.506 13:07:36 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:45:17.506 13:07:36 -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:45:17.506 13:07:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:45:17.506 13:07:36 -- common/autotest_common.sh@10 -- # set +x 00:45:17.506 [2024-07-22 13:07:36.904851] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:45:17.765 13:07:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:45:17.765 13:07:36 -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:45:17.765 13:07:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:45:17.765 13:07:36 -- common/autotest_common.sh@10 -- # set +x 00:45:17.765 Malloc0 00:45:17.765 13:07:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:45:17.765 13:07:36 -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:45:17.765 13:07:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:45:17.765 13:07:36 -- common/autotest_common.sh@10 -- # set +x 00:45:17.765 13:07:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:45:17.765 13:07:36 -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:45:17.765 13:07:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:45:17.765 13:07:36 -- common/autotest_common.sh@10 -- # set +x 00:45:17.765 13:07:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:45:17.765 13:07:36 -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:45:17.765 13:07:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:45:17.765 13:07:36 -- common/autotest_common.sh@10 -- # set +x 00:45:17.765 [2024-07-22 13:07:36.976251] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:45:17.765 13:07:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:45:17.765 13:07:36 -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:45:17.765 13:07:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:45:17.765 13:07:36 -- common/autotest_common.sh@10 -- # set +x 00:45:17.765 [2024-07-22 13:07:36.983981] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:45:17.765 [ 00:45:17.765 { 00:45:17.765 "allow_any_host": true, 00:45:17.765 "hosts": [], 00:45:17.765 "listen_addresses": [], 00:45:17.765 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:45:17.765 "subtype": "Discovery" 00:45:17.765 }, 00:45:17.765 { 00:45:17.765 "allow_any_host": true, 00:45:17.765 "hosts": 
[], 00:45:17.765 "listen_addresses": [ 00:45:17.765 { 00:45:17.765 "adrfam": "IPv4", 00:45:17.765 "traddr": "10.0.0.2", 00:45:17.765 "transport": "TCP", 00:45:17.765 "trsvcid": "4420", 00:45:17.765 "trtype": "TCP" 00:45:17.765 } 00:45:17.765 ], 00:45:17.765 "max_cntlid": 65519, 00:45:17.765 "max_namespaces": 2, 00:45:17.765 "min_cntlid": 1, 00:45:17.765 "model_number": "SPDK bdev Controller", 00:45:17.765 "namespaces": [ 00:45:17.765 { 00:45:17.765 "bdev_name": "Malloc0", 00:45:17.765 "name": "Malloc0", 00:45:17.765 "nguid": "964E5A972E2E488CAF32E3E83FF58780", 00:45:17.765 "nsid": 1, 00:45:17.765 "uuid": "964e5a97-2e2e-488c-af32-e3e83ff58780" 00:45:17.765 } 00:45:17.765 ], 00:45:17.765 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:45:17.765 "serial_number": "SPDK00000000000001", 00:45:17.765 "subtype": "NVMe" 00:45:17.765 } 00:45:17.765 ] 00:45:17.765 13:07:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:45:17.765 13:07:36 -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:45:17.765 13:07:36 -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:45:17.765 13:07:36 -- host/aer.sh@33 -- # aerpid=92309 00:45:17.765 13:07:36 -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:45:17.765 13:07:36 -- common/autotest_common.sh@1244 -- # local i=0 00:45:17.765 13:07:36 -- host/aer.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:45:17.765 13:07:36 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:45:17.765 13:07:37 -- common/autotest_common.sh@1246 -- # '[' 0 -lt 200 ']' 00:45:17.765 13:07:37 -- common/autotest_common.sh@1247 -- # i=1 00:45:17.765 13:07:37 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:45:17.765 13:07:37 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:45:17.765 13:07:37 -- common/autotest_common.sh@1246 -- # '[' 1 -lt 200 ']' 00:45:17.765 13:07:37 -- common/autotest_common.sh@1247 -- # i=2 00:45:17.765 13:07:37 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:45:18.025 13:07:37 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:45:18.025 13:07:37 -- common/autotest_common.sh@1251 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:45:18.025 13:07:37 -- common/autotest_common.sh@1255 -- # return 0 00:45:18.025 13:07:37 -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:45:18.025 13:07:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:45:18.025 13:07:37 -- common/autotest_common.sh@10 -- # set +x 00:45:18.025 Malloc1 00:45:18.025 13:07:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:45:18.025 13:07:37 -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:45:18.025 13:07:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:45:18.025 13:07:37 -- common/autotest_common.sh@10 -- # set +x 00:45:18.025 13:07:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:45:18.025 13:07:37 -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:45:18.025 13:07:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:45:18.025 13:07:37 -- common/autotest_common.sh@10 -- # set +x 00:45:18.025 Asynchronous Event Request test 00:45:18.025 Attaching to 10.0.0.2 00:45:18.025 Attached to 10.0.0.2 00:45:18.025 Registering asynchronous event callbacks... 00:45:18.025 Starting namespace attribute notice tests for all controllers... 
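The asynchronous-event exchange that follows is driven from both sides: the aer test binary connects over TCP and waits for a Namespace Attribute Changed notice, while the target adds a second namespace underneath it. A minimal restatement of that trigger, mirroring the commands in the trace:

    # host side: connect to cnode1 and register for AENs; the harness polls
    # for the touch file named by -t before proceeding
    test/nvme/aer/aer -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file &
    # target side: attach a second namespace, which raises the
    # "aer_cb for log page 4" namespace-changed event seen in the log
    scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2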
00:45:18.025 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:45:18.025 aer_cb - Changed Namespace 00:45:18.025 Cleaning up... 00:45:18.025 [ 00:45:18.025 { 00:45:18.025 "allow_any_host": true, 00:45:18.025 "hosts": [], 00:45:18.025 "listen_addresses": [], 00:45:18.025 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:45:18.025 "subtype": "Discovery" 00:45:18.025 }, 00:45:18.025 { 00:45:18.025 "allow_any_host": true, 00:45:18.025 "hosts": [], 00:45:18.025 "listen_addresses": [ 00:45:18.025 { 00:45:18.025 "adrfam": "IPv4", 00:45:18.025 "traddr": "10.0.0.2", 00:45:18.025 "transport": "TCP", 00:45:18.025 "trsvcid": "4420", 00:45:18.025 "trtype": "TCP" 00:45:18.025 } 00:45:18.025 ], 00:45:18.025 "max_cntlid": 65519, 00:45:18.025 "max_namespaces": 2, 00:45:18.025 "min_cntlid": 1, 00:45:18.025 "model_number": "SPDK bdev Controller", 00:45:18.025 "namespaces": [ 00:45:18.025 { 00:45:18.025 "bdev_name": "Malloc0", 00:45:18.025 "name": "Malloc0", 00:45:18.025 "nguid": "964E5A972E2E488CAF32E3E83FF58780", 00:45:18.025 "nsid": 1, 00:45:18.025 "uuid": "964e5a97-2e2e-488c-af32-e3e83ff58780" 00:45:18.025 }, 00:45:18.025 { 00:45:18.025 "bdev_name": "Malloc1", 00:45:18.025 "name": "Malloc1", 00:45:18.025 "nguid": "B6770DCFCF9A4E4F91BE5F260409B233", 00:45:18.025 "nsid": 2, 00:45:18.025 "uuid": "b6770dcf-cf9a-4e4f-91be-5f260409b233" 00:45:18.025 } 00:45:18.025 ], 00:45:18.025 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:45:18.025 "serial_number": "SPDK00000000000001", 00:45:18.025 "subtype": "NVMe" 00:45:18.025 } 00:45:18.025 ] 00:45:18.025 13:07:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:45:18.025 13:07:37 -- host/aer.sh@43 -- # wait 92309 00:45:18.025 13:07:37 -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:45:18.025 13:07:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:45:18.025 13:07:37 -- common/autotest_common.sh@10 -- # set +x 00:45:18.025 13:07:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:45:18.025 13:07:37 -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:45:18.025 13:07:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:45:18.025 13:07:37 -- common/autotest_common.sh@10 -- # set +x 00:45:18.025 13:07:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:45:18.025 13:07:37 -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:45:18.025 13:07:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:45:18.025 13:07:37 -- common/autotest_common.sh@10 -- # set +x 00:45:18.025 13:07:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:45:18.025 13:07:37 -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:45:18.025 13:07:37 -- host/aer.sh@51 -- # nvmftestfini 00:45:18.025 13:07:37 -- nvmf/common.sh@476 -- # nvmfcleanup 00:45:18.025 13:07:37 -- nvmf/common.sh@116 -- # sync 00:45:18.025 13:07:37 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:45:18.025 13:07:37 -- nvmf/common.sh@119 -- # set +e 00:45:18.025 13:07:37 -- nvmf/common.sh@120 -- # for i in {1..20} 00:45:18.025 13:07:37 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:45:18.025 rmmod nvme_tcp 00:45:18.285 rmmod nvme_fabrics 00:45:18.285 rmmod nvme_keyring 00:45:18.285 13:07:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:45:18.285 13:07:37 -- nvmf/common.sh@123 -- # set -e 00:45:18.285 13:07:37 -- nvmf/common.sh@124 -- # return 0 00:45:18.285 13:07:37 -- nvmf/common.sh@477 -- # '[' -n 92255 ']' 00:45:18.285 13:07:37 -- nvmf/common.sh@478 -- # killprocess 92255 00:45:18.285 13:07:37 -- 
common/autotest_common.sh@926 -- # '[' -z 92255 ']' 00:45:18.285 13:07:37 -- common/autotest_common.sh@930 -- # kill -0 92255 00:45:18.285 13:07:37 -- common/autotest_common.sh@931 -- # uname 00:45:18.285 13:07:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:45:18.285 13:07:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 92255 00:45:18.285 13:07:37 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:45:18.285 13:07:37 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:45:18.285 killing process with pid 92255 00:45:18.285 13:07:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 92255' 00:45:18.285 13:07:37 -- common/autotest_common.sh@945 -- # kill 92255 00:45:18.285 [2024-07-22 13:07:37.511294] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:45:18.285 13:07:37 -- common/autotest_common.sh@950 -- # wait 92255 00:45:18.545 13:07:37 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:45:18.545 13:07:37 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:45:18.545 13:07:37 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:45:18.545 13:07:37 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:45:18.545 13:07:37 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:45:18.545 13:07:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:45:18.545 13:07:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:45:18.545 13:07:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:18.545 13:07:37 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:45:18.545 00:45:18.545 real 0m2.279s 00:45:18.545 user 0m6.413s 00:45:18.545 sys 0m0.618s 00:45:18.545 13:07:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:45:18.545 13:07:37 -- common/autotest_common.sh@10 -- # set +x 00:45:18.545 ************************************ 00:45:18.545 END TEST nvmf_aer 00:45:18.545 ************************************ 00:45:18.545 13:07:37 -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:45:18.545 13:07:37 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:45:18.545 13:07:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:45:18.545 13:07:37 -- common/autotest_common.sh@10 -- # set +x 00:45:18.545 ************************************ 00:45:18.545 START TEST nvmf_async_init 00:45:18.545 ************************************ 00:45:18.545 13:07:37 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:45:18.545 * Looking for test storage... 
00:45:18.545 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:45:18.545 13:07:37 -- host/async_init.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:45:18.545 13:07:37 -- nvmf/common.sh@7 -- # uname -s 00:45:18.545 13:07:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:45:18.545 13:07:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:45:18.545 13:07:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:45:18.545 13:07:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:45:18.545 13:07:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:45:18.545 13:07:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:45:18.545 13:07:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:45:18.545 13:07:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:45:18.545 13:07:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:45:18.545 13:07:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:45:18.545 13:07:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:864ae4a3-cd96-439b-8ef2-6dab4d992115 00:45:18.545 13:07:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=864ae4a3-cd96-439b-8ef2-6dab4d992115 00:45:18.545 13:07:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:45:18.545 13:07:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:45:18.545 13:07:37 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:45:18.545 13:07:37 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:45:18.545 13:07:37 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:45:18.545 13:07:37 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:45:18.545 13:07:37 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:45:18.545 13:07:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:18.545 13:07:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:18.545 13:07:37 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:18.545 13:07:37 -- 
paths/export.sh@5 -- # export PATH 00:45:18.545 13:07:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:18.545 13:07:37 -- nvmf/common.sh@46 -- # : 0 00:45:18.545 13:07:37 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:45:18.545 13:07:37 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:45:18.545 13:07:37 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:45:18.545 13:07:37 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:45:18.545 13:07:37 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:45:18.545 13:07:37 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:45:18.545 13:07:37 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:45:18.545 13:07:37 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:45:18.545 13:07:37 -- host/async_init.sh@13 -- # null_bdev_size=1024 00:45:18.545 13:07:37 -- host/async_init.sh@14 -- # null_block_size=512 00:45:18.545 13:07:37 -- host/async_init.sh@15 -- # null_bdev=null0 00:45:18.545 13:07:37 -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:45:18.545 13:07:37 -- host/async_init.sh@20 -- # uuidgen 00:45:18.545 13:07:37 -- host/async_init.sh@20 -- # tr -d - 00:45:18.545 13:07:37 -- host/async_init.sh@20 -- # nguid=e8faf8e76e784b759dd9795ffa8e32dc 00:45:18.545 13:07:37 -- host/async_init.sh@22 -- # nvmftestinit 00:45:18.545 13:07:37 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:45:18.545 13:07:37 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:45:18.545 13:07:37 -- nvmf/common.sh@436 -- # prepare_net_devs 00:45:18.545 13:07:37 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:45:18.545 13:07:37 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:45:18.545 13:07:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:45:18.545 13:07:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:45:18.545 13:07:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:18.545 13:07:37 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:45:18.545 13:07:37 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:45:18.545 13:07:37 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:45:18.545 13:07:37 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:45:18.545 13:07:37 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:45:18.545 13:07:37 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:45:18.545 13:07:37 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:45:18.545 13:07:37 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:45:18.545 13:07:37 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:45:18.545 13:07:37 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:45:18.545 13:07:37 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:45:18.545 13:07:37 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:45:18.545 13:07:37 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:45:18.545 13:07:37 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:45:18.545 13:07:37 -- nvmf/common.sh@148 -- # 
NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:45:18.545 13:07:37 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:45:18.545 13:07:37 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:45:18.545 13:07:37 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:45:18.545 13:07:37 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:45:18.545 13:07:37 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:45:18.545 Cannot find device "nvmf_tgt_br" 00:45:18.546 13:07:37 -- nvmf/common.sh@154 -- # true 00:45:18.546 13:07:37 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:45:18.546 Cannot find device "nvmf_tgt_br2" 00:45:18.546 13:07:37 -- nvmf/common.sh@155 -- # true 00:45:18.546 13:07:37 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:45:18.546 13:07:37 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:45:18.804 Cannot find device "nvmf_tgt_br" 00:45:18.804 13:07:37 -- nvmf/common.sh@157 -- # true 00:45:18.804 13:07:37 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:45:18.804 Cannot find device "nvmf_tgt_br2" 00:45:18.804 13:07:37 -- nvmf/common.sh@158 -- # true 00:45:18.804 13:07:37 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:45:18.804 13:07:38 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:45:18.804 13:07:38 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:45:18.804 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:45:18.804 13:07:38 -- nvmf/common.sh@161 -- # true 00:45:18.804 13:07:38 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:45:18.804 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:45:18.804 13:07:38 -- nvmf/common.sh@162 -- # true 00:45:18.804 13:07:38 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:45:18.804 13:07:38 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:45:18.804 13:07:38 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:45:18.804 13:07:38 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:45:18.804 13:07:38 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:45:18.804 13:07:38 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:45:18.804 13:07:38 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:45:18.804 13:07:38 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:45:18.804 13:07:38 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:45:18.804 13:07:38 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:45:18.804 13:07:38 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:45:18.804 13:07:38 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:45:18.804 13:07:38 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:45:18.804 13:07:38 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:45:18.804 13:07:38 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:45:18.804 13:07:38 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:45:18.804 13:07:38 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:45:18.804 13:07:38 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:45:18.804 13:07:38 -- 
nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:45:18.804 13:07:38 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:45:18.804 13:07:38 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:45:19.064 13:07:38 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:45:19.064 13:07:38 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:45:19.064 13:07:38 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:45:19.064 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:45:19.064 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:45:19.064 00:45:19.064 --- 10.0.0.2 ping statistics --- 00:45:19.064 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:19.064 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:45:19.064 13:07:38 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:45:19.064 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:45:19.064 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:45:19.064 00:45:19.064 --- 10.0.0.3 ping statistics --- 00:45:19.064 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:19.064 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:45:19.064 13:07:38 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:45:19.064 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:45:19.064 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:45:19.064 00:45:19.064 --- 10.0.0.1 ping statistics --- 00:45:19.064 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:19.064 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:45:19.064 13:07:38 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:45:19.064 13:07:38 -- nvmf/common.sh@421 -- # return 0 00:45:19.064 13:07:38 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:45:19.064 13:07:38 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:45:19.064 13:07:38 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:45:19.064 13:07:38 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:45:19.064 13:07:38 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:45:19.064 13:07:38 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:45:19.064 13:07:38 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:45:19.064 13:07:38 -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:45:19.064 13:07:38 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:45:19.064 13:07:38 -- common/autotest_common.sh@712 -- # xtrace_disable 00:45:19.064 13:07:38 -- common/autotest_common.sh@10 -- # set +x 00:45:19.064 13:07:38 -- nvmf/common.sh@469 -- # nvmfpid=92484 00:45:19.064 13:07:38 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:45:19.064 13:07:38 -- nvmf/common.sh@470 -- # waitforlisten 92484 00:45:19.064 13:07:38 -- common/autotest_common.sh@819 -- # '[' -z 92484 ']' 00:45:19.064 13:07:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:45:19.064 13:07:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:45:19.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:45:19.064 13:07:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
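The nvmf_veth_init trace above builds the whole test topology before the target app is launched: a network namespace for the target, three veth pairs, a bridge joining the host-side peers, an iptables rule admitting NVMe/TCP traffic on port 4420, and three pings to verify reachability. A minimal standalone sketch of the same steps, using only the interface, bridge, and 10.0.0.x names taken from the trace (run as root), would be:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    # move the target-side ends into the namespace
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    # addressing: initiator 10.0.0.1, target 10.0.0.2 and 10.0.0.3
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    # bring everything up, then bridge the host-side peers together
    ip link set nvmf_init_if up ; ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up  ; ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    # admit NVMe/TCP to the default port and allow intra-bridge forwarding
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3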
00:45:19.064 13:07:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:45:19.064 13:07:38 -- common/autotest_common.sh@10 -- # set +x 00:45:19.064 [2024-07-22 13:07:38.328336] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:45:19.064 [2024-07-22 13:07:38.328432] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:45:19.064 [2024-07-22 13:07:38.467826] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:19.323 [2024-07-22 13:07:38.533616] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:45:19.323 [2024-07-22 13:07:38.533785] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:45:19.323 [2024-07-22 13:07:38.533797] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:45:19.323 [2024-07-22 13:07:38.533805] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:45:19.323 [2024-07-22 13:07:38.533834] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:45:19.891 13:07:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:45:19.891 13:07:39 -- common/autotest_common.sh@852 -- # return 0 00:45:19.891 13:07:39 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:45:19.891 13:07:39 -- common/autotest_common.sh@718 -- # xtrace_disable 00:45:19.891 13:07:39 -- common/autotest_common.sh@10 -- # set +x 00:45:19.891 13:07:39 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:45:19.891 13:07:39 -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:45:19.891 13:07:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:45:19.891 13:07:39 -- common/autotest_common.sh@10 -- # set +x 00:45:19.891 [2024-07-22 13:07:39.310950] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:45:20.150 13:07:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:45:20.150 13:07:39 -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:45:20.150 13:07:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:45:20.150 13:07:39 -- common/autotest_common.sh@10 -- # set +x 00:45:20.150 null0 00:45:20.150 13:07:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:45:20.150 13:07:39 -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:45:20.150 13:07:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:45:20.150 13:07:39 -- common/autotest_common.sh@10 -- # set +x 00:45:20.150 13:07:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:45:20.150 13:07:39 -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:45:20.150 13:07:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:45:20.150 13:07:39 -- common/autotest_common.sh@10 -- # set +x 00:45:20.150 13:07:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:45:20.150 13:07:39 -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g e8faf8e76e784b759dd9795ffa8e32dc 00:45:20.150 13:07:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:45:20.150 13:07:39 -- common/autotest_common.sh@10 -- # set +x 00:45:20.150 13:07:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:45:20.150 13:07:39 -- host/async_init.sh@31 -- 
# rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:45:20.150 13:07:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:45:20.150 13:07:39 -- common/autotest_common.sh@10 -- # set +x 00:45:20.150 [2024-07-22 13:07:39.351080] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:45:20.151 13:07:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:45:20.151 13:07:39 -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:45:20.151 13:07:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:45:20.151 13:07:39 -- common/autotest_common.sh@10 -- # set +x 00:45:20.410 nvme0n1 00:45:20.410 13:07:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:45:20.410 13:07:39 -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:45:20.410 13:07:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:45:20.410 13:07:39 -- common/autotest_common.sh@10 -- # set +x 00:45:20.410 [ 00:45:20.410 { 00:45:20.410 "aliases": [ 00:45:20.410 "e8faf8e7-6e78-4b75-9dd9-795ffa8e32dc" 00:45:20.410 ], 00:45:20.410 "assigned_rate_limits": { 00:45:20.410 "r_mbytes_per_sec": 0, 00:45:20.410 "rw_ios_per_sec": 0, 00:45:20.410 "rw_mbytes_per_sec": 0, 00:45:20.410 "w_mbytes_per_sec": 0 00:45:20.410 }, 00:45:20.410 "block_size": 512, 00:45:20.410 "claimed": false, 00:45:20.410 "driver_specific": { 00:45:20.410 "mp_policy": "active_passive", 00:45:20.410 "nvme": [ 00:45:20.410 { 00:45:20.410 "ctrlr_data": { 00:45:20.410 "ana_reporting": false, 00:45:20.410 "cntlid": 1, 00:45:20.410 "firmware_revision": "24.01.1", 00:45:20.410 "model_number": "SPDK bdev Controller", 00:45:20.410 "multi_ctrlr": true, 00:45:20.410 "oacs": { 00:45:20.410 "firmware": 0, 00:45:20.410 "format": 0, 00:45:20.410 "ns_manage": 0, 00:45:20.410 "security": 0 00:45:20.410 }, 00:45:20.410 "serial_number": "00000000000000000000", 00:45:20.410 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:20.410 "vendor_id": "0x8086" 00:45:20.410 }, 00:45:20.410 "ns_data": { 00:45:20.410 "can_share": true, 00:45:20.410 "id": 1 00:45:20.410 }, 00:45:20.410 "trid": { 00:45:20.410 "adrfam": "IPv4", 00:45:20.410 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:20.410 "traddr": "10.0.0.2", 00:45:20.410 "trsvcid": "4420", 00:45:20.410 "trtype": "TCP" 00:45:20.410 }, 00:45:20.410 "vs": { 00:45:20.410 "nvme_version": "1.3" 00:45:20.410 } 00:45:20.410 } 00:45:20.410 ] 00:45:20.410 }, 00:45:20.410 "name": "nvme0n1", 00:45:20.410 "num_blocks": 2097152, 00:45:20.410 "product_name": "NVMe disk", 00:45:20.410 "supported_io_types": { 00:45:20.410 "abort": true, 00:45:20.410 "compare": true, 00:45:20.410 "compare_and_write": true, 00:45:20.410 "flush": true, 00:45:20.410 "nvme_admin": true, 00:45:20.410 "nvme_io": true, 00:45:20.410 "read": true, 00:45:20.410 "reset": true, 00:45:20.410 "unmap": false, 00:45:20.410 "write": true, 00:45:20.410 "write_zeroes": true 00:45:20.410 }, 00:45:20.410 "uuid": "e8faf8e7-6e78-4b75-9dd9-795ffa8e32dc", 00:45:20.410 "zoned": false 00:45:20.410 } 00:45:20.410 ] 00:45:20.410 13:07:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:45:20.410 13:07:39 -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:45:20.410 13:07:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:45:20.410 13:07:39 -- common/autotest_common.sh@10 -- # set +x 00:45:20.410 [2024-07-22 13:07:39.614994] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode0] resetting controller 00:45:20.410 [2024-07-22 13:07:39.615131] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d16c20 (9): Bad file descriptor 00:45:20.410 [2024-07-22 13:07:39.747303] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:45:20.410 13:07:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:45:20.410 13:07:39 -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:45:20.410 13:07:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:45:20.410 13:07:39 -- common/autotest_common.sh@10 -- # set +x 00:45:20.410 [ 00:45:20.410 { 00:45:20.410 "aliases": [ 00:45:20.410 "e8faf8e7-6e78-4b75-9dd9-795ffa8e32dc" 00:45:20.410 ], 00:45:20.410 "assigned_rate_limits": { 00:45:20.410 "r_mbytes_per_sec": 0, 00:45:20.410 "rw_ios_per_sec": 0, 00:45:20.410 "rw_mbytes_per_sec": 0, 00:45:20.410 "w_mbytes_per_sec": 0 00:45:20.410 }, 00:45:20.410 "block_size": 512, 00:45:20.410 "claimed": false, 00:45:20.410 "driver_specific": { 00:45:20.410 "mp_policy": "active_passive", 00:45:20.410 "nvme": [ 00:45:20.410 { 00:45:20.410 "ctrlr_data": { 00:45:20.410 "ana_reporting": false, 00:45:20.410 "cntlid": 2, 00:45:20.410 "firmware_revision": "24.01.1", 00:45:20.410 "model_number": "SPDK bdev Controller", 00:45:20.410 "multi_ctrlr": true, 00:45:20.410 "oacs": { 00:45:20.410 "firmware": 0, 00:45:20.410 "format": 0, 00:45:20.410 "ns_manage": 0, 00:45:20.410 "security": 0 00:45:20.410 }, 00:45:20.410 "serial_number": "00000000000000000000", 00:45:20.410 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:20.410 "vendor_id": "0x8086" 00:45:20.410 }, 00:45:20.410 "ns_data": { 00:45:20.410 "can_share": true, 00:45:20.410 "id": 1 00:45:20.410 }, 00:45:20.410 "trid": { 00:45:20.410 "adrfam": "IPv4", 00:45:20.410 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:20.410 "traddr": "10.0.0.2", 00:45:20.410 "trsvcid": "4420", 00:45:20.410 "trtype": "TCP" 00:45:20.410 }, 00:45:20.411 "vs": { 00:45:20.411 "nvme_version": "1.3" 00:45:20.411 } 00:45:20.411 } 00:45:20.411 ] 00:45:20.411 }, 00:45:20.411 "name": "nvme0n1", 00:45:20.411 "num_blocks": 2097152, 00:45:20.411 "product_name": "NVMe disk", 00:45:20.411 "supported_io_types": { 00:45:20.411 "abort": true, 00:45:20.411 "compare": true, 00:45:20.411 "compare_and_write": true, 00:45:20.411 "flush": true, 00:45:20.411 "nvme_admin": true, 00:45:20.411 "nvme_io": true, 00:45:20.411 "read": true, 00:45:20.411 "reset": true, 00:45:20.411 "unmap": false, 00:45:20.411 "write": true, 00:45:20.411 "write_zeroes": true 00:45:20.411 }, 00:45:20.411 "uuid": "e8faf8e7-6e78-4b75-9dd9-795ffa8e32dc", 00:45:20.411 "zoned": false 00:45:20.411 } 00:45:20.411 ] 00:45:20.411 13:07:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:45:20.411 13:07:39 -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:45:20.411 13:07:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:45:20.411 13:07:39 -- common/autotest_common.sh@10 -- # set +x 00:45:20.411 13:07:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:45:20.411 13:07:39 -- host/async_init.sh@53 -- # mktemp 00:45:20.411 13:07:39 -- host/async_init.sh@53 -- # key_path=/tmp/tmp.Q3caxqDG19 00:45:20.411 13:07:39 -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:45:20.411 13:07:39 -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.Q3caxqDG19 00:45:20.411 13:07:39 -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host 
nqn.2016-06.io.spdk:cnode0 --disable 00:45:20.411 13:07:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:45:20.411 13:07:39 -- common/autotest_common.sh@10 -- # set +x 00:45:20.411 13:07:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:45:20.411 13:07:39 -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:45:20.411 13:07:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:45:20.411 13:07:39 -- common/autotest_common.sh@10 -- # set +x 00:45:20.411 [2024-07-22 13:07:39.815191] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:45:20.411 [2024-07-22 13:07:39.815352] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:45:20.411 13:07:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:45:20.411 13:07:39 -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Q3caxqDG19 00:45:20.411 13:07:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:45:20.411 13:07:39 -- common/autotest_common.sh@10 -- # set +x 00:45:20.411 13:07:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:45:20.411 13:07:39 -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Q3caxqDG19 00:45:20.411 13:07:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:45:20.411 13:07:39 -- common/autotest_common.sh@10 -- # set +x 00:45:20.411 [2024-07-22 13:07:39.831176] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:45:20.671 nvme0n1 00:45:20.671 13:07:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:45:20.671 13:07:39 -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:45:20.671 13:07:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:45:20.671 13:07:39 -- common/autotest_common.sh@10 -- # set +x 00:45:20.671 [ 00:45:20.671 { 00:45:20.671 "aliases": [ 00:45:20.671 "e8faf8e7-6e78-4b75-9dd9-795ffa8e32dc" 00:45:20.671 ], 00:45:20.671 "assigned_rate_limits": { 00:45:20.671 "r_mbytes_per_sec": 0, 00:45:20.671 "rw_ios_per_sec": 0, 00:45:20.671 "rw_mbytes_per_sec": 0, 00:45:20.671 "w_mbytes_per_sec": 0 00:45:20.671 }, 00:45:20.671 "block_size": 512, 00:45:20.671 "claimed": false, 00:45:20.671 "driver_specific": { 00:45:20.671 "mp_policy": "active_passive", 00:45:20.671 "nvme": [ 00:45:20.671 { 00:45:20.671 "ctrlr_data": { 00:45:20.671 "ana_reporting": false, 00:45:20.671 "cntlid": 3, 00:45:20.671 "firmware_revision": "24.01.1", 00:45:20.671 "model_number": "SPDK bdev Controller", 00:45:20.671 "multi_ctrlr": true, 00:45:20.671 "oacs": { 00:45:20.671 "firmware": 0, 00:45:20.671 "format": 0, 00:45:20.671 "ns_manage": 0, 00:45:20.671 "security": 0 00:45:20.671 }, 00:45:20.671 "serial_number": "00000000000000000000", 00:45:20.671 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:20.671 "vendor_id": "0x8086" 00:45:20.671 }, 00:45:20.671 "ns_data": { 00:45:20.671 "can_share": true, 00:45:20.671 "id": 1 00:45:20.671 }, 00:45:20.671 "trid": { 00:45:20.671 "adrfam": "IPv4", 00:45:20.671 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:20.671 "traddr": "10.0.0.2", 00:45:20.671 "trsvcid": "4421", 00:45:20.671 "trtype": "TCP" 00:45:20.671 }, 00:45:20.671 "vs": { 00:45:20.671 "nvme_version": "1.3" 00:45:20.671 } 00:45:20.671 } 00:45:20.671 ] 00:45:20.671 }, 00:45:20.671 
"name": "nvme0n1", 00:45:20.671 "num_blocks": 2097152, 00:45:20.671 "product_name": "NVMe disk", 00:45:20.671 "supported_io_types": { 00:45:20.671 "abort": true, 00:45:20.671 "compare": true, 00:45:20.671 "compare_and_write": true, 00:45:20.671 "flush": true, 00:45:20.671 "nvme_admin": true, 00:45:20.671 "nvme_io": true, 00:45:20.671 "read": true, 00:45:20.671 "reset": true, 00:45:20.671 "unmap": false, 00:45:20.671 "write": true, 00:45:20.671 "write_zeroes": true 00:45:20.671 }, 00:45:20.671 "uuid": "e8faf8e7-6e78-4b75-9dd9-795ffa8e32dc", 00:45:20.671 "zoned": false 00:45:20.671 } 00:45:20.671 ] 00:45:20.671 13:07:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:45:20.671 13:07:39 -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:45:20.671 13:07:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:45:20.671 13:07:39 -- common/autotest_common.sh@10 -- # set +x 00:45:20.671 13:07:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:45:20.671 13:07:39 -- host/async_init.sh@75 -- # rm -f /tmp/tmp.Q3caxqDG19 00:45:20.671 13:07:39 -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:45:20.671 13:07:39 -- host/async_init.sh@78 -- # nvmftestfini 00:45:20.671 13:07:39 -- nvmf/common.sh@476 -- # nvmfcleanup 00:45:20.671 13:07:39 -- nvmf/common.sh@116 -- # sync 00:45:20.671 13:07:39 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:45:20.671 13:07:39 -- nvmf/common.sh@119 -- # set +e 00:45:20.671 13:07:39 -- nvmf/common.sh@120 -- # for i in {1..20} 00:45:20.671 13:07:39 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:45:20.671 rmmod nvme_tcp 00:45:20.671 rmmod nvme_fabrics 00:45:20.671 rmmod nvme_keyring 00:45:20.671 13:07:40 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:45:20.671 13:07:40 -- nvmf/common.sh@123 -- # set -e 00:45:20.671 13:07:40 -- nvmf/common.sh@124 -- # return 0 00:45:20.671 13:07:40 -- nvmf/common.sh@477 -- # '[' -n 92484 ']' 00:45:20.671 13:07:40 -- nvmf/common.sh@478 -- # killprocess 92484 00:45:20.671 13:07:40 -- common/autotest_common.sh@926 -- # '[' -z 92484 ']' 00:45:20.671 13:07:40 -- common/autotest_common.sh@930 -- # kill -0 92484 00:45:20.671 13:07:40 -- common/autotest_common.sh@931 -- # uname 00:45:20.671 13:07:40 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:45:20.671 13:07:40 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 92484 00:45:20.671 killing process with pid 92484 00:45:20.671 13:07:40 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:45:20.671 13:07:40 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:45:20.671 13:07:40 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 92484' 00:45:20.671 13:07:40 -- common/autotest_common.sh@945 -- # kill 92484 00:45:20.671 13:07:40 -- common/autotest_common.sh@950 -- # wait 92484 00:45:20.931 13:07:40 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:45:20.931 13:07:40 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:45:20.931 13:07:40 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:45:20.931 13:07:40 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:45:20.931 13:07:40 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:45:20.931 13:07:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:45:20.931 13:07:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:45:20.931 13:07:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:20.931 13:07:40 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:45:20.931 
00:45:20.931 real 0m2.505s 00:45:20.931 user 0m2.282s 00:45:20.931 sys 0m0.602s 00:45:20.931 13:07:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:45:20.931 13:07:40 -- common/autotest_common.sh@10 -- # set +x 00:45:20.931 ************************************ 00:45:20.931 END TEST nvmf_async_init 00:45:20.931 ************************************ 00:45:20.931 13:07:40 -- nvmf/nvmf.sh@94 -- # run_test dma /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:45:20.931 13:07:40 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:45:20.931 13:07:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:45:20.931 13:07:40 -- common/autotest_common.sh@10 -- # set +x 00:45:20.931 ************************************ 00:45:20.931 START TEST dma 00:45:20.931 ************************************ 00:45:20.931 13:07:40 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:45:21.191 * Looking for test storage... 00:45:21.191 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:45:21.191 13:07:40 -- host/dma.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:45:21.191 13:07:40 -- nvmf/common.sh@7 -- # uname -s 00:45:21.191 13:07:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:45:21.191 13:07:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:45:21.191 13:07:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:45:21.191 13:07:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:45:21.191 13:07:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:45:21.191 13:07:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:45:21.191 13:07:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:45:21.191 13:07:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:45:21.191 13:07:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:45:21.191 13:07:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:45:21.191 13:07:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:864ae4a3-cd96-439b-8ef2-6dab4d992115 00:45:21.191 13:07:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=864ae4a3-cd96-439b-8ef2-6dab4d992115 00:45:21.191 13:07:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:45:21.191 13:07:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:45:21.191 13:07:40 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:45:21.191 13:07:40 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:45:21.191 13:07:40 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:45:21.191 13:07:40 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:45:21.191 13:07:40 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:45:21.191 13:07:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:21.191 13:07:40 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:21.191 13:07:40 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:21.191 13:07:40 -- paths/export.sh@5 -- # export PATH 00:45:21.191 13:07:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:21.191 13:07:40 -- nvmf/common.sh@46 -- # : 0 00:45:21.191 13:07:40 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:45:21.191 13:07:40 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:45:21.191 13:07:40 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:45:21.191 13:07:40 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:45:21.191 13:07:40 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:45:21.191 13:07:40 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:45:21.191 13:07:40 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:45:21.191 13:07:40 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:45:21.191 13:07:40 -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:45:21.191 13:07:40 -- host/dma.sh@13 -- # exit 0 00:45:21.191 00:45:21.191 real 0m0.078s 00:45:21.191 user 0m0.038s 00:45:21.191 sys 0m0.047s 00:45:21.191 13:07:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:45:21.191 13:07:40 -- common/autotest_common.sh@10 -- # set +x 00:45:21.191 ************************************ 00:45:21.191 END TEST dma 00:45:21.191 ************************************ 00:45:21.191 13:07:40 -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:45:21.191 13:07:40 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:45:21.191 13:07:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:45:21.191 13:07:40 -- common/autotest_common.sh@10 -- # set +x 00:45:21.191 ************************************ 00:45:21.191 START TEST nvmf_identify 00:45:21.191 ************************************ 00:45:21.191 13:07:40 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:45:21.191 * Looking for test storage... 
00:45:21.191 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:45:21.191 13:07:40 -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:45:21.191 13:07:40 -- nvmf/common.sh@7 -- # uname -s 00:45:21.191 13:07:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:45:21.191 13:07:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:45:21.192 13:07:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:45:21.192 13:07:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:45:21.192 13:07:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:45:21.192 13:07:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:45:21.192 13:07:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:45:21.192 13:07:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:45:21.192 13:07:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:45:21.192 13:07:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:45:21.192 13:07:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:864ae4a3-cd96-439b-8ef2-6dab4d992115 00:45:21.192 13:07:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=864ae4a3-cd96-439b-8ef2-6dab4d992115 00:45:21.192 13:07:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:45:21.192 13:07:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:45:21.192 13:07:40 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:45:21.192 13:07:40 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:45:21.192 13:07:40 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:45:21.192 13:07:40 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:45:21.192 13:07:40 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:45:21.192 13:07:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:21.192 13:07:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:21.192 13:07:40 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:21.192 13:07:40 -- paths/export.sh@5 
-- # export PATH 00:45:21.192 13:07:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:21.192 13:07:40 -- nvmf/common.sh@46 -- # : 0 00:45:21.192 13:07:40 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:45:21.192 13:07:40 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:45:21.192 13:07:40 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:45:21.192 13:07:40 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:45:21.192 13:07:40 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:45:21.192 13:07:40 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:45:21.192 13:07:40 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:45:21.192 13:07:40 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:45:21.192 13:07:40 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:45:21.192 13:07:40 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:45:21.192 13:07:40 -- host/identify.sh@14 -- # nvmftestinit 00:45:21.192 13:07:40 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:45:21.192 13:07:40 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:45:21.192 13:07:40 -- nvmf/common.sh@436 -- # prepare_net_devs 00:45:21.192 13:07:40 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:45:21.192 13:07:40 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:45:21.192 13:07:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:45:21.192 13:07:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:45:21.192 13:07:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:21.192 13:07:40 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:45:21.192 13:07:40 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:45:21.192 13:07:40 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:45:21.192 13:07:40 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:45:21.192 13:07:40 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:45:21.192 13:07:40 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:45:21.192 13:07:40 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:45:21.192 13:07:40 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:45:21.192 13:07:40 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:45:21.192 13:07:40 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:45:21.192 13:07:40 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:45:21.192 13:07:40 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:45:21.192 13:07:40 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:45:21.192 13:07:40 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:45:21.192 13:07:40 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:45:21.192 13:07:40 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:45:21.192 13:07:40 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:45:21.192 13:07:40 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:45:21.192 13:07:40 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:45:21.192 13:07:40 -- 
nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:45:21.490 Cannot find device "nvmf_tgt_br" 00:45:21.490 13:07:40 -- nvmf/common.sh@154 -- # true 00:45:21.490 13:07:40 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:45:21.490 Cannot find device "nvmf_tgt_br2" 00:45:21.490 13:07:40 -- nvmf/common.sh@155 -- # true 00:45:21.490 13:07:40 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:45:21.490 13:07:40 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:45:21.490 Cannot find device "nvmf_tgt_br" 00:45:21.490 13:07:40 -- nvmf/common.sh@157 -- # true 00:45:21.490 13:07:40 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:45:21.490 Cannot find device "nvmf_tgt_br2" 00:45:21.490 13:07:40 -- nvmf/common.sh@158 -- # true 00:45:21.490 13:07:40 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:45:21.490 13:07:40 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:45:21.490 13:07:40 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:45:21.490 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:45:21.490 13:07:40 -- nvmf/common.sh@161 -- # true 00:45:21.490 13:07:40 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:45:21.490 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:45:21.490 13:07:40 -- nvmf/common.sh@162 -- # true 00:45:21.490 13:07:40 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:45:21.490 13:07:40 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:45:21.490 13:07:40 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:45:21.490 13:07:40 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:45:21.490 13:07:40 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:45:21.490 13:07:40 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:45:21.490 13:07:40 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:45:21.490 13:07:40 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:45:21.490 13:07:40 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:45:21.490 13:07:40 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:45:21.490 13:07:40 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:45:21.490 13:07:40 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:45:21.490 13:07:40 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:45:21.490 13:07:40 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:45:21.490 13:07:40 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:45:21.490 13:07:40 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:45:21.490 13:07:40 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:45:21.490 13:07:40 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:45:21.490 13:07:40 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:45:21.490 13:07:40 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:45:21.490 13:07:40 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:45:21.779 13:07:40 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:45:21.779 13:07:40 -- 
nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:45:21.779 13:07:40 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:45:21.779 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:45:21.779 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:45:21.779 00:45:21.779 --- 10.0.0.2 ping statistics --- 00:45:21.779 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:21.779 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:45:21.779 13:07:40 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:45:21.779 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:45:21.779 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:45:21.779 00:45:21.779 --- 10.0.0.3 ping statistics --- 00:45:21.779 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:21.779 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:45:21.779 13:07:40 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:45:21.779 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:45:21.779 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:45:21.779 00:45:21.779 --- 10.0.0.1 ping statistics --- 00:45:21.779 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:21.779 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:45:21.779 13:07:40 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:45:21.779 13:07:40 -- nvmf/common.sh@421 -- # return 0 00:45:21.779 13:07:40 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:45:21.779 13:07:40 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:45:21.779 13:07:40 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:45:21.779 13:07:40 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:45:21.779 13:07:40 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:45:21.779 13:07:40 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:45:21.779 13:07:40 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:45:21.779 13:07:40 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:45:21.779 13:07:40 -- common/autotest_common.sh@712 -- # xtrace_disable 00:45:21.779 13:07:40 -- common/autotest_common.sh@10 -- # set +x 00:45:21.779 13:07:40 -- host/identify.sh@19 -- # nvmfpid=92744 00:45:21.779 13:07:40 -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:45:21.779 13:07:40 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:45:21.780 13:07:40 -- host/identify.sh@23 -- # waitforlisten 92744 00:45:21.780 13:07:40 -- common/autotest_common.sh@819 -- # '[' -z 92744 ']' 00:45:21.780 13:07:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:45:21.780 13:07:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:45:21.780 13:07:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:45:21.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:45:21.780 13:07:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:45:21.780 13:07:40 -- common/autotest_common.sh@10 -- # set +x 00:45:21.780 [2024-07-22 13:07:40.988810] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:45:21.780 [2024-07-22 13:07:40.988872] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:45:21.780 [2024-07-22 13:07:41.125263] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:45:21.780 [2024-07-22 13:07:41.190847] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:45:21.780 [2024-07-22 13:07:41.191535] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:45:21.780 [2024-07-22 13:07:41.191822] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:45:21.780 [2024-07-22 13:07:41.192055] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:45:21.780 [2024-07-22 13:07:41.192407] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:45:21.780 [2024-07-22 13:07:41.192496] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:45:21.780 [2024-07-22 13:07:41.192663] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:45:21.780 [2024-07-22 13:07:41.192667] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:45:22.719 13:07:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:45:22.719 13:07:41 -- common/autotest_common.sh@852 -- # return 0 00:45:22.719 13:07:41 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:45:22.719 13:07:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:45:22.719 13:07:41 -- common/autotest_common.sh@10 -- # set +x 00:45:22.719 [2024-07-22 13:07:41.987547] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:45:22.719 13:07:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:45:22.719 13:07:42 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:45:22.719 13:07:42 -- common/autotest_common.sh@718 -- # xtrace_disable 00:45:22.719 13:07:42 -- common/autotest_common.sh@10 -- # set +x 00:45:22.719 13:07:42 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:45:22.719 13:07:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:45:22.719 13:07:42 -- common/autotest_common.sh@10 -- # set +x 00:45:22.719 Malloc0 00:45:22.719 13:07:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:45:22.719 13:07:42 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:45:22.719 13:07:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:45:22.719 13:07:42 -- common/autotest_common.sh@10 -- # set +x 00:45:22.719 13:07:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:45:22.719 13:07:42 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:45:22.719 13:07:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:45:22.719 13:07:42 -- common/autotest_common.sh@10 -- # set +x 00:45:22.719 13:07:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:45:22.719 13:07:42 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:45:22.719 13:07:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:45:22.719 13:07:42 -- common/autotest_common.sh@10 -- # set +x 00:45:22.719 [2024-07-22 13:07:42.093096] tcp.c: 951:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:45:22.719 13:07:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:45:22.719 13:07:42 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:45:22.719 13:07:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:45:22.719 13:07:42 -- common/autotest_common.sh@10 -- # set +x 00:45:22.719 13:07:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:45:22.719 13:07:42 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:45:22.719 13:07:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:45:22.719 13:07:42 -- common/autotest_common.sh@10 -- # set +x 00:45:22.719 [2024-07-22 13:07:42.108881] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:45:22.719 [ 00:45:22.719 { 00:45:22.719 "allow_any_host": true, 00:45:22.719 "hosts": [], 00:45:22.719 "listen_addresses": [ 00:45:22.719 { 00:45:22.719 "adrfam": "IPv4", 00:45:22.719 "traddr": "10.0.0.2", 00:45:22.719 "transport": "TCP", 00:45:22.719 "trsvcid": "4420", 00:45:22.719 "trtype": "TCP" 00:45:22.719 } 00:45:22.719 ], 00:45:22.719 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:45:22.719 "subtype": "Discovery" 00:45:22.719 }, 00:45:22.719 { 00:45:22.719 "allow_any_host": true, 00:45:22.719 "hosts": [], 00:45:22.719 "listen_addresses": [ 00:45:22.719 { 00:45:22.719 "adrfam": "IPv4", 00:45:22.719 "traddr": "10.0.0.2", 00:45:22.719 "transport": "TCP", 00:45:22.719 "trsvcid": "4420", 00:45:22.719 "trtype": "TCP" 00:45:22.719 } 00:45:22.719 ], 00:45:22.719 "max_cntlid": 65519, 00:45:22.719 "max_namespaces": 32, 00:45:22.719 "min_cntlid": 1, 00:45:22.719 "model_number": "SPDK bdev Controller", 00:45:22.719 "namespaces": [ 00:45:22.719 { 00:45:22.719 "bdev_name": "Malloc0", 00:45:22.719 "eui64": "ABCDEF0123456789", 00:45:22.719 "name": "Malloc0", 00:45:22.719 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:45:22.719 "nsid": 1, 00:45:22.719 "uuid": "cfd6ad5a-4a5e-47ea-a9ea-51420b744d38" 00:45:22.719 } 00:45:22.719 ], 00:45:22.719 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:45:22.719 "serial_number": "SPDK00000000000001", 00:45:22.719 "subtype": "NVMe" 00:45:22.719 } 00:45:22.719 ] 00:45:22.719 13:07:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:45:22.719 13:07:42 -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:45:22.991 [2024-07-22 13:07:42.146274] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
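For the identify test the target is populated with a 64 MB malloc namespace, and both the discovery subsystem and cnode1 listen on 10.0.0.2:4420 before spdk_nvme_identify is pointed at the discovery NQN. A condensed sketch of that setup, under the same assumption that rpc_cmd forwards to scripts/rpc.py and using only the identifiers that appear in the trace:

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
        --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_get_subsystems
    # query the discovery service exactly as the test does; -L all is what
    # produces the verbose DEBUG trace that follows in this log
    build/bin/spdk_nvme_identify \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' \
        -L all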
00:45:22.991 [2024-07-22 13:07:42.146314] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92797 ] 00:45:22.991 [2024-07-22 13:07:42.279255] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:45:22.991 [2024-07-22 13:07:42.279326] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:45:22.991 [2024-07-22 13:07:42.279333] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:45:22.992 [2024-07-22 13:07:42.279344] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:45:22.992 [2024-07-22 13:07:42.279353] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:45:22.992 [2024-07-22 13:07:42.279477] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:45:22.992 [2024-07-22 13:07:42.279566] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x15116c0 0 00:45:22.992 [2024-07-22 13:07:42.292184] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:45:22.992 [2024-07-22 13:07:42.292206] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:45:22.992 [2024-07-22 13:07:42.292227] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:45:22.992 [2024-07-22 13:07:42.292231] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:45:22.992 [2024-07-22 13:07:42.292275] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:45:22.992 [2024-07-22 13:07:42.292282] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:45:22.992 [2024-07-22 13:07:42.292286] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15116c0) 00:45:22.992 [2024-07-22 13:07:42.292298] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:45:22.992 [2024-07-22 13:07:42.292336] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1547f60, cid 0, qid 0 00:45:22.992 [2024-07-22 13:07:42.303197] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:45:22.992 [2024-07-22 13:07:42.303216] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:45:22.992 [2024-07-22 13:07:42.303237] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:45:22.992 [2024-07-22 13:07:42.303243] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1547f60) on tqpair=0x15116c0 00:45:22.992 [2024-07-22 13:07:42.303254] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:45:22.992 [2024-07-22 13:07:42.303262] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:45:22.992 [2024-07-22 13:07:42.303268] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:45:22.992 [2024-07-22 13:07:42.303283] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:45:22.992 [2024-07-22 13:07:42.303288] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:45:22.992 [2024-07-22 
13:07:42.303292] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15116c0) 00:45:22.992 [2024-07-22 13:07:42.303301] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:22.992 [2024-07-22 13:07:42.303330] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1547f60, cid 0, qid 0 00:45:22.992 [2024-07-22 13:07:42.303401] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:45:22.992 [2024-07-22 13:07:42.303408] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:45:22.992 [2024-07-22 13:07:42.303411] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:45:22.992 [2024-07-22 13:07:42.303415] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1547f60) on tqpair=0x15116c0 00:45:22.992 [2024-07-22 13:07:42.303422] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:45:22.992 [2024-07-22 13:07:42.303430] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:45:22.992 [2024-07-22 13:07:42.303437] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:45:22.992 [2024-07-22 13:07:42.303441] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:45:22.992 [2024-07-22 13:07:42.303460] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15116c0) 00:45:22.992 [2024-07-22 13:07:42.303467] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:22.992 [2024-07-22 13:07:42.303503] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1547f60, cid 0, qid 0 00:45:22.992 [2024-07-22 13:07:42.303558] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:45:22.992 [2024-07-22 13:07:42.303564] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:45:22.992 [2024-07-22 13:07:42.303568] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:45:22.992 [2024-07-22 13:07:42.303572] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1547f60) on tqpair=0x15116c0 00:45:22.992 [2024-07-22 13:07:42.303579] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:45:22.992 [2024-07-22 13:07:42.303588] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:45:22.992 [2024-07-22 13:07:42.303595] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:45:22.992 [2024-07-22 13:07:42.303599] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:45:22.992 [2024-07-22 13:07:42.303603] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15116c0) 00:45:22.992 [2024-07-22 13:07:42.303611] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:22.992 [2024-07-22 13:07:42.303629] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1547f60, cid 0, qid 0 00:45:22.992 [2024-07-22 13:07:42.303686] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:45:22.992 [2024-07-22 13:07:42.303693] 
nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:45:22.992 [2024-07-22 13:07:42.303697] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:45:22.992 [2024-07-22 13:07:42.303701] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1547f60) on tqpair=0x15116c0 00:45:22.992 [2024-07-22 13:07:42.303708] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:45:22.992 [2024-07-22 13:07:42.303718] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:45:22.992 [2024-07-22 13:07:42.303723] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:45:22.992 [2024-07-22 13:07:42.303727] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15116c0) 00:45:22.992 [2024-07-22 13:07:42.303734] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:22.992 [2024-07-22 13:07:42.303751] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1547f60, cid 0, qid 0 00:45:22.992 [2024-07-22 13:07:42.303815] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:45:22.992 [2024-07-22 13:07:42.303821] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:45:22.992 [2024-07-22 13:07:42.303825] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:45:22.992 [2024-07-22 13:07:42.303831] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1547f60) on tqpair=0x15116c0 00:45:22.992 [2024-07-22 13:07:42.303837] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:45:22.992 [2024-07-22 13:07:42.303843] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:45:22.992 [2024-07-22 13:07:42.303851] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:45:22.992 [2024-07-22 13:07:42.303987] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:45:22.992 [2024-07-22 13:07:42.303993] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:45:22.992 [2024-07-22 13:07:42.304002] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:45:22.992 [2024-07-22 13:07:42.304006] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:45:22.992 [2024-07-22 13:07:42.304010] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15116c0) 00:45:22.992 [2024-07-22 13:07:42.304017] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:22.992 [2024-07-22 13:07:42.304036] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1547f60, cid 0, qid 0 00:45:22.992 [2024-07-22 13:07:42.304091] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:45:22.992 [2024-07-22 13:07:42.304097] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:45:22.992 [2024-07-22 13:07:42.304101] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
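The entries up to this point trace the NVMe-oF host bring-up that SPDK performs when attaching a controller over TCP: socket connect, ICReq/ICResp, FABRIC CONNECT, then the property reads/writes that walk CC.EN and CSTS.RDY. Purely as a hedged, minimal sketch (not the actual identify binary from this run; program name and error handling are illustrative), the same sequence can be driven through the public host API, where spdk_nvme_connect() performs all of the handshaking internally:

#include "spdk/stdinc.h"
#include "spdk/env.h"
#include "spdk/nvme.h"

int
main(void)
{
    struct spdk_env_opts env_opts;
    struct spdk_nvme_transport_id trid = {};
    struct spdk_nvme_ctrlr *ctrlr;

    spdk_env_opts_init(&env_opts);
    env_opts.name = "discovery_sketch";   /* illustrative name, not from this run */
    if (spdk_env_init(&env_opts) != 0) {
        return 1;
    }

    /* Same target as in the log: TCP, IPv4, 10.0.0.2:4420, discovery subsystem NQN. */
    if (spdk_nvme_transport_id_parse(&trid,
            "trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
            "subnqn:nqn.2014-08.org.nvmexpress.discovery") != 0) {
        return 1;
    }

    /* Drives the sequence shown above: connect socket, ICReq/ICResp,
     * FABRIC CONNECT, then the CC.EN / CSTS.RDY property handshake. */
    ctrlr = spdk_nvme_connect(&trid, NULL, 0);
    if (ctrlr == NULL) {
        fprintf(stderr, "failed to connect to %s\n", trid.traddr);
        return 1;
    }

    /* ... admin commands here, e.g. the discovery GET LOG PAGE sketched further down ... */

    spdk_nvme_detach(ctrlr);
    return 0;
}
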
00:45:22.992 [2024-07-22 13:07:42.304105] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1547f60) on tqpair=0x15116c0 00:45:22.992 [2024-07-22 13:07:42.304112] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:45:22.992 [2024-07-22 13:07:42.304122] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:45:22.992 [2024-07-22 13:07:42.304126] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:45:22.992 [2024-07-22 13:07:42.304130] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15116c0) 00:45:22.992 [2024-07-22 13:07:42.304137] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:22.992 [2024-07-22 13:07:42.304155] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1547f60, cid 0, qid 0 00:45:22.992 [2024-07-22 13:07:42.304232] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:45:22.992 [2024-07-22 13:07:42.304258] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:45:22.992 [2024-07-22 13:07:42.304262] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:45:22.992 [2024-07-22 13:07:42.304266] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1547f60) on tqpair=0x15116c0 00:45:22.992 [2024-07-22 13:07:42.304273] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:45:22.992 [2024-07-22 13:07:42.304278] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:45:22.992 [2024-07-22 13:07:42.304287] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:45:22.992 [2024-07-22 13:07:42.304311] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:45:22.992 [2024-07-22 13:07:42.304321] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:45:22.992 [2024-07-22 13:07:42.304326] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:45:22.992 [2024-07-22 13:07:42.304330] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15116c0) 00:45:22.993 [2024-07-22 13:07:42.304340] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:22.993 [2024-07-22 13:07:42.304363] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1547f60, cid 0, qid 0 00:45:22.993 [2024-07-22 13:07:42.304497] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:45:22.993 [2024-07-22 13:07:42.304505] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:45:22.993 [2024-07-22 13:07:42.304509] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:45:22.993 [2024-07-22 13:07:42.304513] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x15116c0): datao=0, datal=4096, cccid=0 00:45:22.993 [2024-07-22 13:07:42.304518] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1547f60) on tqpair(0x15116c0): expected_datao=0, 
payload_size=4096 00:45:22.993 [2024-07-22 13:07:42.304528] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:45:22.993 [2024-07-22 13:07:42.304533] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:45:22.993 [2024-07-22 13:07:42.304542] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:45:22.993 [2024-07-22 13:07:42.304548] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:45:22.993 [2024-07-22 13:07:42.304552] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:45:22.993 [2024-07-22 13:07:42.304556] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1547f60) on tqpair=0x15116c0 00:45:22.993 [2024-07-22 13:07:42.304580] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:45:22.993 [2024-07-22 13:07:42.304586] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:45:22.993 [2024-07-22 13:07:42.304591] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:45:22.993 [2024-07-22 13:07:42.304596] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:45:22.993 [2024-07-22 13:07:42.304601] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:45:22.993 [2024-07-22 13:07:42.304606] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:45:22.993 [2024-07-22 13:07:42.304620] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:45:22.993 [2024-07-22 13:07:42.304628] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:45:22.993 [2024-07-22 13:07:42.304633] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:45:22.993 [2024-07-22 13:07:42.304637] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15116c0) 00:45:22.993 [2024-07-22 13:07:42.304644] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:45:22.993 [2024-07-22 13:07:42.304666] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1547f60, cid 0, qid 0 00:45:22.993 [2024-07-22 13:07:42.304741] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:45:22.993 [2024-07-22 13:07:42.304747] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:45:22.993 [2024-07-22 13:07:42.304751] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:45:22.993 [2024-07-22 13:07:42.304755] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1547f60) on tqpair=0x15116c0 00:45:22.993 [2024-07-22 13:07:42.304764] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:45:22.993 [2024-07-22 13:07:42.304768] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:45:22.993 [2024-07-22 13:07:42.304772] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15116c0) 00:45:22.993 [2024-07-22 13:07:42.304779] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:45:22.993 [2024-07-22 
13:07:42.304785] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:45:22.993 [2024-07-22 13:07:42.304789] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:45:22.993 [2024-07-22 13:07:42.304793] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x15116c0) 00:45:22.993 [2024-07-22 13:07:42.304799] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:45:22.993 [2024-07-22 13:07:42.304805] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:45:22.993 [2024-07-22 13:07:42.304809] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:45:22.993 [2024-07-22 13:07:42.304812] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x15116c0) 00:45:22.993 [2024-07-22 13:07:42.304818] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:45:22.993 [2024-07-22 13:07:42.304824] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:45:22.993 [2024-07-22 13:07:42.304828] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:45:22.993 [2024-07-22 13:07:42.304831] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15116c0) 00:45:22.993 [2024-07-22 13:07:42.304837] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:45:22.993 [2024-07-22 13:07:42.304842] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:45:22.993 [2024-07-22 13:07:42.304856] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:45:22.993 [2024-07-22 13:07:42.304863] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:45:22.993 [2024-07-22 13:07:42.304867] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:45:22.993 [2024-07-22 13:07:42.304871] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x15116c0) 00:45:22.993 [2024-07-22 13:07:42.304878] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:22.993 [2024-07-22 13:07:42.304898] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1547f60, cid 0, qid 0 00:45:22.993 [2024-07-22 13:07:42.304906] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15480c0, cid 1, qid 0 00:45:22.993 [2024-07-22 13:07:42.304911] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1548220, cid 2, qid 0 00:45:22.993 [2024-07-22 13:07:42.304916] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1548380, cid 3, qid 0 00:45:22.993 [2024-07-22 13:07:42.304920] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15484e0, cid 4, qid 0 00:45:22.993 [2024-07-22 13:07:42.305016] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:45:22.993 [2024-07-22 13:07:42.305022] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:45:22.993 [2024-07-22 13:07:42.305026] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:45:22.993 [2024-07-22 13:07:42.305030] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: 
*DEBUG*: complete tcp_req(0x15484e0) on tqpair=0x15116c0 00:45:22.993 [2024-07-22 13:07:42.305037] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:45:22.993 [2024-07-22 13:07:42.305042] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:45:22.993 [2024-07-22 13:07:42.305053] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:45:22.993 [2024-07-22 13:07:42.305058] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:45:22.993 [2024-07-22 13:07:42.305063] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x15116c0) 00:45:22.993 [2024-07-22 13:07:42.305071] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:22.993 [2024-07-22 13:07:42.305092] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15484e0, cid 4, qid 0 00:45:22.993 [2024-07-22 13:07:42.305167] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:45:22.993 [2024-07-22 13:07:42.305176] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:45:22.993 [2024-07-22 13:07:42.305179] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:45:22.993 [2024-07-22 13:07:42.305183] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x15116c0): datao=0, datal=4096, cccid=4 00:45:22.993 [2024-07-22 13:07:42.305188] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15484e0) on tqpair(0x15116c0): expected_datao=0, payload_size=4096 00:45:22.993 [2024-07-22 13:07:42.305196] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:45:22.993 [2024-07-22 13:07:42.305200] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:45:22.993 [2024-07-22 13:07:42.305209] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:45:22.993 [2024-07-22 13:07:42.305215] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:45:22.993 [2024-07-22 13:07:42.305219] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:45:22.993 [2024-07-22 13:07:42.305223] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15484e0) on tqpair=0x15116c0 00:45:22.993 [2024-07-22 13:07:42.305237] nvme_ctrlr.c:4024:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:45:22.993 [2024-07-22 13:07:42.305271] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:45:22.993 [2024-07-22 13:07:42.305278] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:45:22.993 [2024-07-22 13:07:42.305282] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x15116c0) 00:45:22.993 [2024-07-22 13:07:42.305289] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:22.993 [2024-07-22 13:07:42.305297] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:45:22.993 [2024-07-22 13:07:42.305301] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:45:22.993 [2024-07-22 13:07:42.305305] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x15116c0) 00:45:22.993 [2024-07-22 13:07:42.305311] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:45:22.993 [2024-07-22 13:07:42.305339] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15484e0, cid 4, qid 0 00:45:22.993 [2024-07-22 13:07:42.305347] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1548640, cid 5, qid 0 00:45:22.993 [2024-07-22 13:07:42.305460] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:45:22.994 [2024-07-22 13:07:42.305467] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:45:22.994 [2024-07-22 13:07:42.305471] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:45:22.994 [2024-07-22 13:07:42.305475] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x15116c0): datao=0, datal=1024, cccid=4 00:45:22.994 [2024-07-22 13:07:42.305480] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15484e0) on tqpair(0x15116c0): expected_datao=0, payload_size=1024 00:45:22.994 [2024-07-22 13:07:42.305487] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:45:22.994 [2024-07-22 13:07:42.305491] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:45:22.994 [2024-07-22 13:07:42.305497] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:45:22.994 [2024-07-22 13:07:42.305503] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:45:22.994 [2024-07-22 13:07:42.305507] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:45:22.994 [2024-07-22 13:07:42.305511] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1548640) on tqpair=0x15116c0 00:45:22.994 [2024-07-22 13:07:42.346188] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:45:22.994 [2024-07-22 13:07:42.346206] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:45:22.994 [2024-07-22 13:07:42.346227] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:45:22.994 [2024-07-22 13:07:42.346231] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15484e0) on tqpair=0x15116c0 00:45:22.994 [2024-07-22 13:07:42.346245] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:45:22.994 [2024-07-22 13:07:42.346250] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:45:22.994 [2024-07-22 13:07:42.346254] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x15116c0) 00:45:22.994 [2024-07-22 13:07:42.346262] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:22.994 [2024-07-22 13:07:42.346291] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15484e0, cid 4, qid 0 00:45:22.994 [2024-07-22 13:07:42.346367] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:45:22.994 [2024-07-22 13:07:42.346374] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:45:22.994 [2024-07-22 13:07:42.346377] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:45:22.994 [2024-07-22 13:07:42.346381] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x15116c0): datao=0, datal=3072, cccid=4 00:45:22.994 [2024-07-22 13:07:42.346385] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15484e0) on tqpair(0x15116c0): expected_datao=0, payload_size=3072 00:45:22.994 [2024-07-22 
13:07:42.346409] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:45:22.994 [2024-07-22 13:07:42.346414] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:45:22.994 [2024-07-22 13:07:42.346422] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:45:22.994 [2024-07-22 13:07:42.346427] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:45:22.994 [2024-07-22 13:07:42.346431] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:45:22.994 [2024-07-22 13:07:42.346435] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15484e0) on tqpair=0x15116c0 00:45:22.994 [2024-07-22 13:07:42.346461] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:45:22.994 [2024-07-22 13:07:42.346481] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:45:22.994 [2024-07-22 13:07:42.346485] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x15116c0) 00:45:22.994 [2024-07-22 13:07:42.346493] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:22.994 [2024-07-22 13:07:42.346519] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15484e0, cid 4, qid 0 00:45:22.994 [2024-07-22 13:07:42.346629] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:45:22.994 [2024-07-22 13:07:42.346637] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:45:22.994 [2024-07-22 13:07:42.346641] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:45:22.994 [2024-07-22 13:07:42.346645] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x15116c0): datao=0, datal=8, cccid=4 00:45:22.994 [2024-07-22 13:07:42.346650] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15484e0) on tqpair(0x15116c0): expected_datao=0, payload_size=8 00:45:22.994 [2024-07-22 13:07:42.346658] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:45:22.994 [2024-07-22 13:07:42.346662] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:45:22.994 [2024-07-22 13:07:42.391228] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:45:22.994 [2024-07-22 13:07:42.391249] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:45:22.994 [2024-07-22 13:07:42.391270] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:45:22.994 [2024-07-22 13:07:42.391275] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15484e0) on tqpair=0x15116c0 00:45:22.994 ===================================================== 00:45:22.994 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:45:22.994 ===================================================== 00:45:22.994 Controller Capabilities/Features 00:45:22.994 ================================ 00:45:22.994 Vendor ID: 0000 00:45:22.994 Subsystem Vendor ID: 0000 00:45:22.994 Serial Number: .................... 00:45:22.994 Model Number: ........................................ 
00:45:22.994 Firmware Version: 24.01.1 00:45:22.994 Recommended Arb Burst: 0 00:45:22.994 IEEE OUI Identifier: 00 00 00 00:45:22.994 Multi-path I/O 00:45:22.994 May have multiple subsystem ports: No 00:45:22.994 May have multiple controllers: No 00:45:22.994 Associated with SR-IOV VF: No 00:45:22.994 Max Data Transfer Size: 131072 00:45:22.994 Max Number of Namespaces: 0 00:45:22.994 Max Number of I/O Queues: 1024 00:45:22.994 NVMe Specification Version (VS): 1.3 00:45:22.994 NVMe Specification Version (Identify): 1.3 00:45:22.994 Maximum Queue Entries: 128 00:45:22.994 Contiguous Queues Required: Yes 00:45:22.994 Arbitration Mechanisms Supported 00:45:22.994 Weighted Round Robin: Not Supported 00:45:22.994 Vendor Specific: Not Supported 00:45:22.994 Reset Timeout: 15000 ms 00:45:22.994 Doorbell Stride: 4 bytes 00:45:22.994 NVM Subsystem Reset: Not Supported 00:45:22.994 Command Sets Supported 00:45:22.994 NVM Command Set: Supported 00:45:22.994 Boot Partition: Not Supported 00:45:22.994 Memory Page Size Minimum: 4096 bytes 00:45:22.994 Memory Page Size Maximum: 4096 bytes 00:45:22.994 Persistent Memory Region: Not Supported 00:45:22.994 Optional Asynchronous Events Supported 00:45:22.994 Namespace Attribute Notices: Not Supported 00:45:22.994 Firmware Activation Notices: Not Supported 00:45:22.994 ANA Change Notices: Not Supported 00:45:22.994 PLE Aggregate Log Change Notices: Not Supported 00:45:22.994 LBA Status Info Alert Notices: Not Supported 00:45:22.994 EGE Aggregate Log Change Notices: Not Supported 00:45:22.994 Normal NVM Subsystem Shutdown event: Not Supported 00:45:22.994 Zone Descriptor Change Notices: Not Supported 00:45:22.994 Discovery Log Change Notices: Supported 00:45:22.994 Controller Attributes 00:45:22.994 128-bit Host Identifier: Not Supported 00:45:22.994 Non-Operational Permissive Mode: Not Supported 00:45:22.994 NVM Sets: Not Supported 00:45:22.994 Read Recovery Levels: Not Supported 00:45:22.994 Endurance Groups: Not Supported 00:45:22.994 Predictable Latency Mode: Not Supported 00:45:22.994 Traffic Based Keep ALive: Not Supported 00:45:22.994 Namespace Granularity: Not Supported 00:45:22.994 SQ Associations: Not Supported 00:45:22.994 UUID List: Not Supported 00:45:22.994 Multi-Domain Subsystem: Not Supported 00:45:22.994 Fixed Capacity Management: Not Supported 00:45:22.994 Variable Capacity Management: Not Supported 00:45:22.994 Delete Endurance Group: Not Supported 00:45:22.994 Delete NVM Set: Not Supported 00:45:22.994 Extended LBA Formats Supported: Not Supported 00:45:22.994 Flexible Data Placement Supported: Not Supported 00:45:22.994 00:45:22.994 Controller Memory Buffer Support 00:45:22.994 ================================ 00:45:22.994 Supported: No 00:45:22.994 00:45:22.994 Persistent Memory Region Support 00:45:22.994 ================================ 00:45:22.994 Supported: No 00:45:22.994 00:45:22.994 Admin Command Set Attributes 00:45:22.994 ============================ 00:45:22.994 Security Send/Receive: Not Supported 00:45:22.994 Format NVM: Not Supported 00:45:22.994 Firmware Activate/Download: Not Supported 00:45:22.994 Namespace Management: Not Supported 00:45:22.994 Device Self-Test: Not Supported 00:45:22.994 Directives: Not Supported 00:45:22.994 NVMe-MI: Not Supported 00:45:22.994 Virtualization Management: Not Supported 00:45:22.994 Doorbell Buffer Config: Not Supported 00:45:22.994 Get LBA Status Capability: Not Supported 00:45:22.994 Command & Feature Lockdown Capability: Not Supported 00:45:22.994 Abort Command Limit: 1 00:45:22.995 
Async Event Request Limit: 4 00:45:22.995 Number of Firmware Slots: N/A 00:45:22.995 Firmware Slot 1 Read-Only: N/A 00:45:22.995 Firmware Activation Without Reset: N/A 00:45:22.995 Multiple Update Detection Support: N/A 00:45:22.995 Firmware Update Granularity: No Information Provided 00:45:22.995 Per-Namespace SMART Log: No 00:45:22.995 Asymmetric Namespace Access Log Page: Not Supported 00:45:22.995 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:45:22.995 Command Effects Log Page: Not Supported 00:45:22.995 Get Log Page Extended Data: Supported 00:45:22.995 Telemetry Log Pages: Not Supported 00:45:22.995 Persistent Event Log Pages: Not Supported 00:45:22.995 Supported Log Pages Log Page: May Support 00:45:22.995 Commands Supported & Effects Log Page: Not Supported 00:45:22.995 Feature Identifiers & Effects Log Page:May Support 00:45:22.995 NVMe-MI Commands & Effects Log Page: May Support 00:45:22.995 Data Area 4 for Telemetry Log: Not Supported 00:45:22.995 Error Log Page Entries Supported: 128 00:45:22.995 Keep Alive: Not Supported 00:45:22.995 00:45:22.995 NVM Command Set Attributes 00:45:22.995 ========================== 00:45:22.995 Submission Queue Entry Size 00:45:22.995 Max: 1 00:45:22.995 Min: 1 00:45:22.995 Completion Queue Entry Size 00:45:22.995 Max: 1 00:45:22.995 Min: 1 00:45:22.995 Number of Namespaces: 0 00:45:22.995 Compare Command: Not Supported 00:45:22.995 Write Uncorrectable Command: Not Supported 00:45:22.995 Dataset Management Command: Not Supported 00:45:22.995 Write Zeroes Command: Not Supported 00:45:22.995 Set Features Save Field: Not Supported 00:45:22.995 Reservations: Not Supported 00:45:22.995 Timestamp: Not Supported 00:45:22.995 Copy: Not Supported 00:45:22.995 Volatile Write Cache: Not Present 00:45:22.995 Atomic Write Unit (Normal): 1 00:45:22.995 Atomic Write Unit (PFail): 1 00:45:22.995 Atomic Compare & Write Unit: 1 00:45:22.995 Fused Compare & Write: Supported 00:45:22.995 Scatter-Gather List 00:45:22.995 SGL Command Set: Supported 00:45:22.995 SGL Keyed: Supported 00:45:22.995 SGL Bit Bucket Descriptor: Not Supported 00:45:22.995 SGL Metadata Pointer: Not Supported 00:45:22.995 Oversized SGL: Not Supported 00:45:22.995 SGL Metadata Address: Not Supported 00:45:22.995 SGL Offset: Supported 00:45:22.995 Transport SGL Data Block: Not Supported 00:45:22.995 Replay Protected Memory Block: Not Supported 00:45:22.995 00:45:22.995 Firmware Slot Information 00:45:22.995 ========================= 00:45:22.995 Active slot: 0 00:45:22.995 00:45:22.995 00:45:22.995 Error Log 00:45:22.995 ========= 00:45:22.995 00:45:22.995 Active Namespaces 00:45:22.995 ================= 00:45:22.995 Discovery Log Page 00:45:22.995 ================== 00:45:22.995 Generation Counter: 2 00:45:22.995 Number of Records: 2 00:45:22.995 Record Format: 0 00:45:22.995 00:45:22.995 Discovery Log Entry 0 00:45:22.995 ---------------------- 00:45:22.995 Transport Type: 3 (TCP) 00:45:22.995 Address Family: 1 (IPv4) 00:45:22.995 Subsystem Type: 3 (Current Discovery Subsystem) 00:45:22.995 Entry Flags: 00:45:22.995 Duplicate Returned Information: 1 00:45:22.995 Explicit Persistent Connection Support for Discovery: 1 00:45:22.995 Transport Requirements: 00:45:22.995 Secure Channel: Not Required 00:45:22.995 Port ID: 0 (0x0000) 00:45:22.995 Controller ID: 65535 (0xffff) 00:45:22.995 Admin Max SQ Size: 128 00:45:22.995 Transport Service Identifier: 4420 00:45:22.995 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:45:22.995 Transport Address: 10.0.0.2 00:45:22.995 
Discovery Log Entry 1 00:45:22.995 ---------------------- 00:45:22.995 Transport Type: 3 (TCP) 00:45:22.995 Address Family: 1 (IPv4) 00:45:22.995 Subsystem Type: 2 (NVM Subsystem) 00:45:22.995 Entry Flags: 00:45:22.995 Duplicate Returned Information: 0 00:45:22.995 Explicit Persistent Connection Support for Discovery: 0 00:45:22.995 Transport Requirements: 00:45:22.995 Secure Channel: Not Required 00:45:22.995 Port ID: 0 (0x0000) 00:45:22.995 Controller ID: 65535 (0xffff) 00:45:22.995 Admin Max SQ Size: 128 00:45:22.995 Transport Service Identifier: 4420 00:45:22.995 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:45:22.995 Transport Address: 10.0.0.2 [2024-07-22 13:07:42.391405] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:45:22.995 [2024-07-22 13:07:42.391425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:22.995 [2024-07-22 13:07:42.391432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:22.995 [2024-07-22 13:07:42.391438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:22.995 [2024-07-22 13:07:42.391444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:22.995 [2024-07-22 13:07:42.391453] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:45:22.995 [2024-07-22 13:07:42.391457] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:45:22.995 [2024-07-22 13:07:42.391461] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15116c0) 00:45:22.995 [2024-07-22 13:07:42.391469] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:22.995 [2024-07-22 13:07:42.391495] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1548380, cid 3, qid 0 00:45:22.995 [2024-07-22 13:07:42.391596] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:45:22.995 [2024-07-22 13:07:42.391603] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:45:22.995 [2024-07-22 13:07:42.391607] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:45:22.995 [2024-07-22 13:07:42.391611] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1548380) on tqpair=0x15116c0 00:45:22.995 [2024-07-22 13:07:42.391620] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:45:22.995 [2024-07-22 13:07:42.391624] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:45:22.995 [2024-07-22 13:07:42.391627] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15116c0) 00:45:22.995 [2024-07-22 13:07:42.391635] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:22.995 [2024-07-22 13:07:42.391658] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1548380, cid 3, qid 0 00:45:22.995 [2024-07-22 13:07:42.391732] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:45:22.995 [2024-07-22 13:07:42.391739] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:45:22.995 [2024-07-22 13:07:42.391742] 
nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:45:22.995 [2024-07-22 13:07:42.391746] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1548380) on tqpair=0x15116c0 00:45:22.995 [2024-07-22 13:07:42.391753] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:45:22.995 [2024-07-22 13:07:42.391758] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:45:22.995 [2024-07-22 13:07:42.391768] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:45:22.995 [2024-07-22 13:07:42.391772] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:45:22.995 [2024-07-22 13:07:42.391776] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15116c0) 00:45:22.995 [2024-07-22 13:07:42.391783] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:22.995 [2024-07-22 13:07:42.391801] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1548380, cid 3, qid 0 00:45:22.995 [2024-07-22 13:07:42.391854] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:45:22.995 [2024-07-22 13:07:42.391861] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:45:22.995 [2024-07-22 13:07:42.391864] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:45:22.995 [2024-07-22 13:07:42.391868] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1548380) on tqpair=0x15116c0 00:45:22.995 [2024-07-22 13:07:42.391880] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:45:22.995 [2024-07-22 13:07:42.391885] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:45:22.995 [2024-07-22 13:07:42.391888] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15116c0) 00:45:22.995 [2024-07-22 13:07:42.391895] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:22.995 [2024-07-22 13:07:42.391927] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1548380, cid 3, qid 0 00:45:22.995 [2024-07-22 13:07:42.391995] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:45:22.995 [2024-07-22 13:07:42.392002] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:45:22.995 [2024-07-22 13:07:42.392005] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:45:22.995 [2024-07-22 13:07:42.392009] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1548380) on tqpair=0x15116c0 00:45:22.995 [2024-07-22 13:07:42.392020] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:45:22.996 [2024-07-22 13:07:42.392024] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:45:22.996 [2024-07-22 13:07:42.392028] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15116c0) 00:45:22.996 [2024-07-22 13:07:42.392035] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:22.996 [2024-07-22 13:07:42.392052] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1548380, cid 3, qid 0 00:45:22.996 [2024-07-22 13:07:42.392105] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:45:22.996 [2024-07-22 
13:07:42.392111] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:45:22.996 [2024-07-22 13:07:42.392115] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:45:22.996 [2024-07-22 13:07:42.392119] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1548380) on tqpair=0x15116c0 00:45:22.996 [2024-07-22 13:07:42.392130] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:45:22.996 [2024-07-22 13:07:42.392134] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:45:22.996 [2024-07-22 13:07:42.392138] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15116c0) 00:45:22.996 [2024-07-22 13:07:42.392145] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:22.996 [2024-07-22 13:07:42.392161] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1548380, cid 3, qid 0 00:45:22.996 [2024-07-22 13:07:42.392229] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:45:22.996 [2024-07-22 13:07:42.392237] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:45:22.996 [2024-07-22 13:07:42.392241] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:45:22.996 [2024-07-22 13:07:42.392245] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1548380) on tqpair=0x15116c0 00:45:22.996 [2024-07-22 13:07:42.392256] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:45:22.996 [2024-07-22 13:07:42.392261] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:45:22.996 [2024-07-22 13:07:42.392264] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15116c0) 00:45:22.996 [2024-07-22 13:07:42.392272] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:22.996 [2024-07-22 13:07:42.392292] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1548380, cid 3, qid 0 00:45:22.996 [2024-07-22 13:07:42.392348] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:45:22.996 [2024-07-22 13:07:42.392355] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:45:22.996 [2024-07-22 13:07:42.392358] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:45:22.996 [2024-07-22 13:07:42.392362] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1548380) on tqpair=0x15116c0 00:45:22.996 [2024-07-22 13:07:42.392373] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:45:22.996 [2024-07-22 13:07:42.392379] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:45:22.996 [2024-07-22 13:07:42.392383] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15116c0) 00:45:22.996 [2024-07-22 13:07:42.392390] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:22.996 [2024-07-22 13:07:42.392408] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1548380, cid 3, qid 0 00:45:22.996 [2024-07-22 13:07:42.392457] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:45:22.996 [2024-07-22 13:07:42.392463] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:45:22.996 [2024-07-22 13:07:42.392467] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
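Everything in the discovery report printed above was retrieved with the GET LOG PAGE (02h) commands visible in the trace: cdw10 values 0x00ff0070, 0x02ff0070 and 0x00010070 all carry log identifier 0x70 (the discovery log) in the low byte and the dword count minus one in the upper half, matching the 1024, 3072 and 8 byte transfers in the c2h_data entries. A hedged sketch of fetching and walking that page through the public API follows; the fixed 4 KiB buffer, the record clamp and the busy-poll loop are simplifications for illustration, not how the identify tool is implemented:

#include "spdk/stdinc.h"
#include "spdk/nvme.h"
#include "spdk/nvmf_spec.h"

static bool g_log_done;

static void
get_log_done(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
    (void)cb_arg;
    (void)cpl;
    g_log_done = true;
}

/* 'ctrlr' is assumed to come from spdk_nvme_connect() against the discovery NQN. */
static void
dump_discovery_log(struct spdk_nvme_ctrlr *ctrlr)
{
    /* 1 KiB header plus up to three 1 KiB records; the report above has two. */
    struct spdk_nvmf_discovery_log_page *log = calloc(1, 4096);
    uint64_t i, nrec;

    if (log == NULL) {
        return;
    }

    g_log_done = false;
    if (spdk_nvme_ctrlr_cmd_get_log_page(ctrlr, SPDK_NVME_LOG_DISCOVERY, 0,
                                         log, 4096, 0, get_log_done, NULL) != 0) {
        free(log);
        return;
    }
    while (!g_log_done) {
        spdk_nvme_ctrlr_process_admin_completions(ctrlr);
    }

    printf("genctr=%" PRIu64 " numrec=%" PRIu64 "\n", log->genctr, log->numrec);
    nrec = log->numrec < 3 ? log->numrec : 3;   /* only what fits in the 4 KiB buffer */
    for (i = 0; i < nrec; i++) {
        struct spdk_nvmf_discovery_log_page_entry *e = &log->entries[i];

        /* Fields are fixed-width and space padded, so bound the prints. */
        printf("  subtype=%u subnqn=%.*s traddr=%.*s trsvcid=%.*s\n",
               (unsigned)e->subtype,
               (int)sizeof(e->subnqn), e->subnqn,
               (int)sizeof(e->traddr), e->traddr,
               (int)sizeof(e->trsvcid), e->trsvcid);
    }
    free(log);
}
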
00:45:22.996 [2024-07-22 13:07:42.392471] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1548380) on tqpair=0x15116c0 00:45:22.996 [2024-07-22 13:07:42.392482] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:45:22.996 [2024-07-22 13:07:42.392486] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:45:22.996 [2024-07-22 13:07:42.392489] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15116c0) 00:45:22.996 [2024-07-22 13:07:42.392497] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:22.996 [2024-07-22 13:07:42.392514] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1548380, cid 3, qid 0 00:45:22.996 [2024-07-22 13:07:42.392564] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:45:22.996 [2024-07-22 13:07:42.392571] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:45:22.996 [2024-07-22 13:07:42.392574] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:45:22.996 [2024-07-22 13:07:42.392578] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1548380) on tqpair=0x15116c0 00:45:22.996 [2024-07-22 13:07:42.392589] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:45:22.996 [2024-07-22 13:07:42.392593] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:45:22.996 [2024-07-22 13:07:42.392597] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15116c0) 00:45:22.996 [2024-07-22 13:07:42.392604] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:22.996 [2024-07-22 13:07:42.392621] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1548380, cid 3, qid 0 00:45:22.996 [2024-07-22 13:07:42.392675] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:45:22.996 [2024-07-22 13:07:42.392681] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:45:22.996 [2024-07-22 13:07:42.392685] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:45:22.996 [2024-07-22 13:07:42.392688] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1548380) on tqpair=0x15116c0 00:45:22.996 [2024-07-22 13:07:42.392699] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:45:22.996 [2024-07-22 13:07:42.392703] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:45:22.996 [2024-07-22 13:07:42.392707] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15116c0) 00:45:22.996 [2024-07-22 13:07:42.392714] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:22.996 [2024-07-22 13:07:42.392731] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1548380, cid 3, qid 0 00:45:22.996 [2024-07-22 13:07:42.392781] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:45:22.996 [2024-07-22 13:07:42.392788] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:45:22.996 [2024-07-22 13:07:42.392791] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:45:22.996 [2024-07-22 13:07:42.392795] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1548380) on tqpair=0x15116c0 00:45:22.996 [2024-07-22 13:07:42.392806] 
nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:45:22.996 [2024-07-22 13:07:42.392810] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:45:22.996 [2024-07-22 13:07:42.392814] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15116c0) 00:45:22.996 [2024-07-22 13:07:42.392821] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:22.996 [2024-07-22 13:07:42.392838] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1548380, cid 3, qid 0 00:45:22.996 [2024-07-22 13:07:42.392889] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:45:22.996 [2024-07-22 13:07:42.392895] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:45:22.996 [2024-07-22 13:07:42.392899] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:45:22.996 [2024-07-22 13:07:42.392903] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1548380) on tqpair=0x15116c0 00:45:22.996 [2024-07-22 13:07:42.392914] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:45:22.996 [2024-07-22 13:07:42.392918] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:45:22.996 [2024-07-22 13:07:42.392921] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15116c0) 00:45:22.996 [2024-07-22 13:07:42.392928] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:22.996 [2024-07-22 13:07:42.392945] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1548380, cid 3, qid 0 00:45:22.996 [2024-07-22 13:07:42.393001] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:45:22.996 [2024-07-22 13:07:42.393008] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:45:22.996 [2024-07-22 13:07:42.393011] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:45:22.996 [2024-07-22 13:07:42.393015] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1548380) on tqpair=0x15116c0 00:45:22.996 [2024-07-22 13:07:42.393026] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:45:22.996 [2024-07-22 13:07:42.393030] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:45:22.996 [2024-07-22 13:07:42.393034] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15116c0) 00:45:22.996 [2024-07-22 13:07:42.393041] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:22.997 [2024-07-22 13:07:42.393057] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1548380, cid 3, qid 0 00:45:22.997 [2024-07-22 13:07:42.393108] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:45:22.997 [2024-07-22 13:07:42.393114] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:45:22.997 [2024-07-22 13:07:42.393118] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:45:22.997 [2024-07-22 13:07:42.393122] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1548380) on tqpair=0x15116c0 00:45:22.997 [2024-07-22 13:07:42.393133] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:45:22.997 [2024-07-22 13:07:42.393147] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:45:22.997 [2024-07-22 
13:07:42.393151] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15116c0) 00:45:22.997 [2024-07-22 13:07:42.393159] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:22.997 [2024-07-22 13:07:42.393178] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1548380, cid 3, qid 0 00:45:22.997 [2024-07-22 13:07:42.393239] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:45:22.997 [2024-07-22 13:07:42.393260] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:45:22.997 [2024-07-22 13:07:42.393264] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:45:22.997 [2024-07-22 13:07:42.393284] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1548380) on tqpair=0x15116c0 00:45:22.997 [2024-07-22 13:07:42.393295] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:45:22.997 [2024-07-22 13:07:42.393300] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:45:22.997 [2024-07-22 13:07:42.393303] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15116c0) 00:45:22.997 [2024-07-22 13:07:42.393310] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:22.997 [2024-07-22 13:07:42.393327] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1548380, cid 3, qid 0 00:45:22.997 [2024-07-22 13:07:42.393378] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:45:22.997 [2024-07-22 13:07:42.393384] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:45:22.997 [2024-07-22 13:07:42.393388] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:45:22.997 [2024-07-22 13:07:42.393392] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1548380) on tqpair=0x15116c0 00:45:22.997 [2024-07-22 13:07:42.393402] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:45:22.997 [2024-07-22 13:07:42.393407] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:45:22.997 [2024-07-22 13:07:42.393410] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15116c0) 00:45:22.997 [2024-07-22 13:07:42.393417] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:22.997 [2024-07-22 13:07:42.393434] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1548380, cid 3, qid 0 00:45:22.997 [2024-07-22 13:07:42.393486] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:45:22.997 [2024-07-22 13:07:42.393493] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:45:22.997 [2024-07-22 13:07:42.393496] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:45:22.997 [2024-07-22 13:07:42.393500] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1548380) on tqpair=0x15116c0 00:45:22.997 [2024-07-22 13:07:42.393511] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:45:22.997 [2024-07-22 13:07:42.393515] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:45:22.997 [2024-07-22 13:07:42.393519] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15116c0) 00:45:22.997 [2024-07-22 13:07:42.393526] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:22.997 [2024-07-22 13:07:42.393542] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1548380, cid 3, qid 0 00:45:22.997 [2024-07-22 13:07:42.393593] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:45:22.997 [2024-07-22 13:07:42.393599] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:45:22.997 [2024-07-22 13:07:42.393603] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:45:22.997 [2024-07-22 13:07:42.393607] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1548380) on tqpair=0x15116c0 00:45:22.997 [2024-07-22 13:07:42.393617] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:45:22.997 [2024-07-22 13:07:42.393622] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:45:22.997 [2024-07-22 13:07:42.393625] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15116c0) 00:45:22.997 [2024-07-22 13:07:42.393632] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:22.997 [2024-07-22 13:07:42.393650] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1548380, cid 3, qid 0 00:45:22.997 [2024-07-22 13:07:42.393736] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:45:22.997 [2024-07-22 13:07:42.393742] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:45:22.997 [2024-07-22 13:07:42.393745] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:45:22.997 [2024-07-22 13:07:42.393749] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1548380) on tqpair=0x15116c0 00:45:22.997 [2024-07-22 13:07:42.393760] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:45:22.997 [2024-07-22 13:07:42.393764] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:45:22.997 [2024-07-22 13:07:42.393768] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15116c0) 00:45:22.997 [2024-07-22 13:07:42.393775] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:22.997 [2024-07-22 13:07:42.393792] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1548380, cid 3, qid 0 00:45:22.997 [2024-07-22 13:07:42.393845] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:45:22.997 [2024-07-22 13:07:42.393851] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:45:22.997 [2024-07-22 13:07:42.393855] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:45:22.997 [2024-07-22 13:07:42.393859] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1548380) on tqpair=0x15116c0 00:45:22.997 [2024-07-22 13:07:42.393869] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:45:22.997 [2024-07-22 13:07:42.393874] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:45:22.997 [2024-07-22 13:07:42.393877] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15116c0) 00:45:22.997 [2024-07-22 13:07:42.393884] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:22.997 [2024-07-22 13:07:42.393901] nvme_tcp.c: 
872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1548380, cid 3, qid 0 00:45:22.997 [2024-07-22 13:07:42.393952] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:45:22.997 [2024-07-22 13:07:42.393959] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:45:22.997 [2024-07-22 13:07:42.393962] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:45:22.997 [2024-07-22 13:07:42.393966] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1548380) on tqpair=0x15116c0 00:45:22.997 [2024-07-22 13:07:42.393977] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:45:22.997 [2024-07-22 13:07:42.393981] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:45:22.997 [2024-07-22 13:07:42.393985] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15116c0) 00:45:22.997 [2024-07-22 13:07:42.393992] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:22.997 [2024-07-22 13:07:42.394009] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1548380, cid 3, qid 0 00:45:22.997 [2024-07-22 13:07:42.394059] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:45:22.997 [2024-07-22 13:07:42.394066] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:45:22.997 [2024-07-22 13:07:42.394069] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:45:22.997 [2024-07-22 13:07:42.394073] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1548380) on tqpair=0x15116c0 00:45:22.997 [2024-07-22 13:07:42.394084] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:45:22.997 [2024-07-22 13:07:42.394089] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:45:22.997 [2024-07-22 13:07:42.394092] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15116c0) 00:45:22.997 [2024-07-22 13:07:42.394099] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:22.997 [2024-07-22 13:07:42.394116] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1548380, cid 3, qid 0 00:45:22.997 [2024-07-22 13:07:42.394199] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:45:22.997 [2024-07-22 13:07:42.394206] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:45:22.997 [2024-07-22 13:07:42.394209] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:45:22.997 [2024-07-22 13:07:42.394213] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1548380) on tqpair=0x15116c0 00:45:22.997 [2024-07-22 13:07:42.394234] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:45:22.997 [2024-07-22 13:07:42.394239] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:45:22.997 [2024-07-22 13:07:42.394243] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15116c0) 00:45:22.997 [2024-07-22 13:07:42.394250] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:22.997 [2024-07-22 13:07:42.394269] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1548380, cid 3, qid 0 00:45:22.997 [2024-07-22 13:07:42.394325] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 
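The long run of FABRIC PROPERTY GET entries that follows "Prepare to destruct SSD" is the host polling CSTS after requesting a normal shutdown (RTD3E = 0, shutdown timeout = 10000 ms, as logged above); an application gets all of this from a single spdk_nvme_detach(ctrlr) call rather than by touching properties itself. Purely to show what is being polled, here is a transport-agnostic sketch of the handshake; the property accessor callbacks are hypothetical stand-ins, with register offsets and bit positions taken from the NVMe base specification:

#include <stdint.h>
#include <stdbool.h>

#define NVME_REG_CC   0x14u   /* Controller Configuration */
#define NVME_REG_CSTS 0x1Cu   /* Controller Status */

typedef uint32_t (*read_prop_fn)(uint32_t offset);
typedef void (*write_prop_fn)(uint32_t offset, uint32_t value);

/* Returns true once CSTS.SHST reports "shutdown processing complete". */
static bool
nvme_normal_shutdown(read_prop_fn read_prop, write_prop_fn write_prop,
                     unsigned int max_polls)
{
    uint32_t cc = read_prop(NVME_REG_CC);

    cc &= ~(0x3u << 14);          /* clear CC.SHN */
    cc |= (0x1u << 14);           /* CC.SHN = 01b: normal shutdown */
    write_prop(NVME_REG_CC, cc);

    while (max_polls-- > 0) {
        uint32_t csts = read_prop(NVME_REG_CSTS);

        if (((csts >> 2) & 0x3u) == 0x2u) {   /* CSTS.SHST = 10b: complete */
            return true;
        }
    }
    return false;                 /* timed out, bounded like the 10000 ms above */
}
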
00:45:22.997 [2024-07-22 13:07:42.394331] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:45:22.997 [2024-07-22 13:07:42.394335] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:45:22.997 [2024-07-22 13:07:42.394339] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1548380) on tqpair=0x15116c0 00:45:22.997 [2024-07-22 13:07:42.394350] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:45:22.997 [2024-07-22 13:07:42.394354] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:45:22.997 [2024-07-22 13:07:42.394358] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15116c0) 00:45:22.997 [2024-07-22 13:07:42.394365] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:22.997 [2024-07-22 13:07:42.394382] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1548380, cid 3, qid 0 00:45:22.997 [2024-07-22 13:07:42.394434] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:45:22.997 [2024-07-22 13:07:42.394440] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:45:22.997 [2024-07-22 13:07:42.394444] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:45:22.997 [2024-07-22 13:07:42.394448] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1548380) on tqpair=0x15116c0 00:45:22.997 [2024-07-22 13:07:42.394459] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:45:22.997 [2024-07-22 13:07:42.394463] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:45:22.997 [2024-07-22 13:07:42.394467] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15116c0) 00:45:22.998 [2024-07-22 13:07:42.394474] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:22.998 [2024-07-22 13:07:42.394490] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1548380, cid 3, qid 0 00:45:22.998 [2024-07-22 13:07:42.394570] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:45:22.998 [2024-07-22 13:07:42.394578] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:45:22.998 [2024-07-22 13:07:42.394582] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:45:22.998 [2024-07-22 13:07:42.394586] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1548380) on tqpair=0x15116c0 00:45:22.998 [2024-07-22 13:07:42.394598] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:45:22.998 [2024-07-22 13:07:42.394603] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:45:22.998 [2024-07-22 13:07:42.394607] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15116c0) 00:45:22.998 [2024-07-22 13:07:42.394614] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:22.998 [2024-07-22 13:07:42.394634] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1548380, cid 3, qid 0 00:45:22.998 [2024-07-22 13:07:42.394691] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:45:22.998 [2024-07-22 13:07:42.394698] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:45:22.998 [2024-07-22 13:07:42.394701] 
nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:45:22.998 [2024-07-22 13:07:42.394705] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1548380) on tqpair=0x15116c0 00:45:22.998 [2024-07-22 13:07:42.394717] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:45:22.998 [2024-07-22 13:07:42.394721] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:45:22.998 [2024-07-22 13:07:42.394725] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15116c0) 00:45:22.998 [2024-07-22 13:07:42.394733] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:22.998 [2024-07-22 13:07:42.394751] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1548380, cid 3, qid 0 00:45:22.998 [2024-07-22 13:07:42.394816] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:45:22.998 [2024-07-22 13:07:42.394824] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:45:22.998 [2024-07-22 13:07:42.394827] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:45:22.998 [2024-07-22 13:07:42.394832] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1548380) on tqpair=0x15116c0 00:45:22.998 [2024-07-22 13:07:42.394843] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:45:22.998 [2024-07-22 13:07:42.394848] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:45:22.998 [2024-07-22 13:07:42.394851] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15116c0) 00:45:22.998 [2024-07-22 13:07:42.394859] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:22.998 [2024-07-22 13:07:42.394906] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1548380, cid 3, qid 0 00:45:22.998 [2024-07-22 13:07:42.394963] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:45:22.998 [2024-07-22 13:07:42.394969] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:45:22.998 [2024-07-22 13:07:42.394972] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:45:22.998 [2024-07-22 13:07:42.394976] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1548380) on tqpair=0x15116c0 00:45:22.998 [2024-07-22 13:07:42.394987] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:45:22.998 [2024-07-22 13:07:42.394991] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:45:22.998 [2024-07-22 13:07:42.394995] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15116c0) 00:45:22.998 [2024-07-22 13:07:42.395002] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:22.998 [2024-07-22 13:07:42.395019] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1548380, cid 3, qid 0 00:45:22.998 [2024-07-22 13:07:42.395071] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:45:22.998 [2024-07-22 13:07:42.395077] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:45:22.998 [2024-07-22 13:07:42.395081] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:45:22.998 [2024-07-22 13:07:42.395084] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1548380) on 
tqpair=0x15116c0 00:45:22.998 [2024-07-22 13:07:42.395095] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:45:22.998 [2024-07-22 13:07:42.395100] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:45:22.998 [2024-07-22 13:07:42.395103] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15116c0) 00:45:22.998 [2024-07-22 13:07:42.395110] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:22.998 [2024-07-22 13:07:42.395127] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1548380, cid 3, qid 0 00:45:22.998 [2024-07-22 13:07:42.399234] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:45:22.998 [2024-07-22 13:07:42.399253] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:45:22.998 [2024-07-22 13:07:42.399274] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:45:22.998 [2024-07-22 13:07:42.399279] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1548380) on tqpair=0x15116c0 00:45:22.998 [2024-07-22 13:07:42.399309] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:45:22.998 [2024-07-22 13:07:42.399314] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:45:22.998 [2024-07-22 13:07:42.399318] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15116c0) 00:45:22.998 [2024-07-22 13:07:42.399342] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:22.998 [2024-07-22 13:07:42.399368] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1548380, cid 3, qid 0 00:45:22.998 [2024-07-22 13:07:42.399425] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:45:22.998 [2024-07-22 13:07:42.399432] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:45:22.998 [2024-07-22 13:07:42.399435] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:45:22.998 [2024-07-22 13:07:42.399439] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1548380) on tqpair=0x15116c0 00:45:22.998 [2024-07-22 13:07:42.399464] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 7 milliseconds 00:45:23.261 00:45:23.261 13:07:42 -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:45:23.261 [2024-07-22 13:07:42.432331] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
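Note (reference only, not part of the test output): the host/identify.sh step above runs spdk_nvme_identify against the TCP target at 10.0.0.2:4420, and the DEBUG lines that follow trace the admin-queue connect/identify sequence it drives. A minimal C sketch of the same flow through the public SPDK API (spdk/nvme.h, spdk/env.h) is shown below; the transport string is copied from the -r argument above, while the program name "identify_sketch", the printed fields, and the abbreviated error handling are illustrative assumptions and do not come from the SPDK test scripts.

/* Reference sketch (assumptions noted above): connect to the NVMe-oF/TCP
 * subsystem targeted by spdk_nvme_identify, print a few Identify Controller
 * fields, walk the active namespaces, and detach. */
#include <inttypes.h>
#include <stdio.h>
#include <string.h>

#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid;
	struct spdk_nvme_ctrlr *ctrlr;
	const struct spdk_nvme_ctrlr_data *cdata;
	uint32_t nsid;

	spdk_env_opts_init(&env_opts);
	env_opts.name = "identify_sketch";   /* illustrative app name */
	if (spdk_env_init(&env_opts) < 0) {
		return 1;
	}

	/* Same transport ID string as the -r argument in the command above. */
	memset(&trid, 0, sizeof(trid));
	if (spdk_nvme_transport_id_parse(&trid,
	    "trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
	    "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		return 1;
	}

	/* This call walks the admin-queue init states traced in the DEBUG
	 * output below: connect adminq, read vs/cap, enable controller,
	 * identify controller, configure AER, set keep alive timeout, ... */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		return 1;
	}

	cdata = spdk_nvme_ctrlr_get_data(ctrlr);
	printf("Serial Number: %.*s\n", (int)sizeof(cdata->sn), (const char *)cdata->sn);
	printf("Model Number:  %.*s\n", (int)sizeof(cdata->mn), (const char *)cdata->mn);
	printf("Firmware:      %.*s\n", (int)sizeof(cdata->fr), (const char *)cdata->fr);

	/* Walk active namespaces (cf. "Namespace 1 was added" in the log below). */
	for (nsid = spdk_nvme_ctrlr_get_first_active_ns(ctrlr); nsid != 0;
	     nsid = spdk_nvme_ctrlr_get_next_active_ns(ctrlr, nsid)) {
		struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, nsid);

		if (ns != NULL) {
			printf("nsid %" PRIu32 ": %" PRIu64 " bytes, sector size %" PRIu32 "\n",
			       nsid, spdk_nvme_ns_get_size(ns),
			       spdk_nvme_ns_get_sector_size(ns));
		}
	}

	spdk_nvme_detach(ctrlr);
	return 0;
}

End of reference sketch; the captured test output continues below.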
00:45:23.261 [2024-07-22 13:07:42.432385] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92805 ] 00:45:23.261 [2024-07-22 13:07:42.565271] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:45:23.261 [2024-07-22 13:07:42.565344] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:45:23.261 [2024-07-22 13:07:42.565352] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:45:23.261 [2024-07-22 13:07:42.565362] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:45:23.261 [2024-07-22 13:07:42.565371] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:45:23.261 [2024-07-22 13:07:42.565483] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:45:23.261 [2024-07-22 13:07:42.565532] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x17e26c0 0 00:45:23.261 [2024-07-22 13:07:42.578182] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:45:23.261 [2024-07-22 13:07:42.578203] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:45:23.261 [2024-07-22 13:07:42.578226] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:45:23.261 [2024-07-22 13:07:42.578230] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:45:23.261 [2024-07-22 13:07:42.578272] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:45:23.261 [2024-07-22 13:07:42.578279] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:45:23.261 [2024-07-22 13:07:42.578284] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x17e26c0) 00:45:23.261 [2024-07-22 13:07:42.578294] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:45:23.261 [2024-07-22 13:07:42.578324] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1818f60, cid 0, qid 0 00:45:23.261 [2024-07-22 13:07:42.586184] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:45:23.261 [2024-07-22 13:07:42.586205] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:45:23.261 [2024-07-22 13:07:42.586227] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:45:23.261 [2024-07-22 13:07:42.586232] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1818f60) on tqpair=0x17e26c0 00:45:23.261 [2024-07-22 13:07:42.586244] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:45:23.261 [2024-07-22 13:07:42.586251] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:45:23.261 [2024-07-22 13:07:42.586257] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:45:23.261 [2024-07-22 13:07:42.586273] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:45:23.261 [2024-07-22 13:07:42.586278] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:45:23.261 [2024-07-22 13:07:42.586282] nvme_tcp.c: 
902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x17e26c0) 00:45:23.261 [2024-07-22 13:07:42.586291] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:23.261 [2024-07-22 13:07:42.586321] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1818f60, cid 0, qid 0 00:45:23.261 [2024-07-22 13:07:42.586385] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:45:23.261 [2024-07-22 13:07:42.586393] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:45:23.261 [2024-07-22 13:07:42.586403] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:45:23.261 [2024-07-22 13:07:42.586407] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1818f60) on tqpair=0x17e26c0 00:45:23.261 [2024-07-22 13:07:42.586414] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:45:23.261 [2024-07-22 13:07:42.586422] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:45:23.261 [2024-07-22 13:07:42.586430] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:45:23.261 [2024-07-22 13:07:42.586434] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:45:23.261 [2024-07-22 13:07:42.586438] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x17e26c0) 00:45:23.261 [2024-07-22 13:07:42.586461] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:23.261 [2024-07-22 13:07:42.586497] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1818f60, cid 0, qid 0 00:45:23.261 [2024-07-22 13:07:42.586578] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:45:23.261 [2024-07-22 13:07:42.586586] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:45:23.261 [2024-07-22 13:07:42.586590] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:45:23.261 [2024-07-22 13:07:42.586595] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1818f60) on tqpair=0x17e26c0 00:45:23.261 [2024-07-22 13:07:42.586603] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:45:23.261 [2024-07-22 13:07:42.586613] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:45:23.261 [2024-07-22 13:07:42.586621] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:45:23.261 [2024-07-22 13:07:42.586625] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:45:23.261 [2024-07-22 13:07:42.586630] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x17e26c0) 00:45:23.261 [2024-07-22 13:07:42.586638] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:23.261 [2024-07-22 13:07:42.586659] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1818f60, cid 0, qid 0 00:45:23.261 [2024-07-22 13:07:42.586716] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:45:23.261 [2024-07-22 13:07:42.586723] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:45:23.261 [2024-07-22 
13:07:42.586727] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:45:23.261 [2024-07-22 13:07:42.586732] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1818f60) on tqpair=0x17e26c0 00:45:23.261 [2024-07-22 13:07:42.586739] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:45:23.261 [2024-07-22 13:07:42.586751] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:45:23.261 [2024-07-22 13:07:42.586756] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:45:23.261 [2024-07-22 13:07:42.586760] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x17e26c0) 00:45:23.261 [2024-07-22 13:07:42.586768] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:23.261 [2024-07-22 13:07:42.586787] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1818f60, cid 0, qid 0 00:45:23.261 [2024-07-22 13:07:42.586846] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:45:23.261 [2024-07-22 13:07:42.586853] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:45:23.261 [2024-07-22 13:07:42.586858] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:45:23.261 [2024-07-22 13:07:42.586863] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1818f60) on tqpair=0x17e26c0 00:45:23.261 [2024-07-22 13:07:42.586869] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:45:23.262 [2024-07-22 13:07:42.586890] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:45:23.262 [2024-07-22 13:07:42.586899] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:45:23.262 [2024-07-22 13:07:42.587005] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:45:23.262 [2024-07-22 13:07:42.587009] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:45:23.262 [2024-07-22 13:07:42.587018] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:45:23.262 [2024-07-22 13:07:42.587023] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:45:23.262 [2024-07-22 13:07:42.587027] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x17e26c0) 00:45:23.262 [2024-07-22 13:07:42.587034] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:23.262 [2024-07-22 13:07:42.587054] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1818f60, cid 0, qid 0 00:45:23.262 [2024-07-22 13:07:42.587110] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:45:23.262 [2024-07-22 13:07:42.587117] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:45:23.262 [2024-07-22 13:07:42.587121] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:45:23.262 [2024-07-22 13:07:42.587126] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1818f60) on tqpair=0x17e26c0 00:45:23.262 
[2024-07-22 13:07:42.587133] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:45:23.262 [2024-07-22 13:07:42.587143] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:45:23.262 [2024-07-22 13:07:42.587148] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:45:23.262 [2024-07-22 13:07:42.587153] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x17e26c0) 00:45:23.262 [2024-07-22 13:07:42.587160] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:23.262 [2024-07-22 13:07:42.587193] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1818f60, cid 0, qid 0 00:45:23.262 [2024-07-22 13:07:42.587255] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:45:23.262 [2024-07-22 13:07:42.587262] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:45:23.262 [2024-07-22 13:07:42.587266] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:45:23.262 [2024-07-22 13:07:42.587271] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1818f60) on tqpair=0x17e26c0 00:45:23.262 [2024-07-22 13:07:42.587277] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:45:23.262 [2024-07-22 13:07:42.587283] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:45:23.262 [2024-07-22 13:07:42.587291] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:45:23.262 [2024-07-22 13:07:42.587307] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:45:23.262 [2024-07-22 13:07:42.587317] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:45:23.262 [2024-07-22 13:07:42.587321] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:45:23.262 [2024-07-22 13:07:42.587325] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x17e26c0) 00:45:23.262 [2024-07-22 13:07:42.587333] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:23.262 [2024-07-22 13:07:42.587354] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1818f60, cid 0, qid 0 00:45:23.262 [2024-07-22 13:07:42.587454] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:45:23.262 [2024-07-22 13:07:42.587463] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:45:23.262 [2024-07-22 13:07:42.587467] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:45:23.262 [2024-07-22 13:07:42.587472] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x17e26c0): datao=0, datal=4096, cccid=0 00:45:23.262 [2024-07-22 13:07:42.587477] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1818f60) on tqpair(0x17e26c0): expected_datao=0, payload_size=4096 00:45:23.262 [2024-07-22 13:07:42.587486] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:45:23.262 [2024-07-22 13:07:42.587490] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: 
*DEBUG*: enter 00:45:23.262 [2024-07-22 13:07:42.587499] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:45:23.262 [2024-07-22 13:07:42.587505] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:45:23.262 [2024-07-22 13:07:42.587509] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:45:23.262 [2024-07-22 13:07:42.587514] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1818f60) on tqpair=0x17e26c0 00:45:23.262 [2024-07-22 13:07:42.587523] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:45:23.262 [2024-07-22 13:07:42.587529] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:45:23.262 [2024-07-22 13:07:42.587534] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:45:23.262 [2024-07-22 13:07:42.587539] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:45:23.262 [2024-07-22 13:07:42.587544] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:45:23.262 [2024-07-22 13:07:42.587549] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:45:23.262 [2024-07-22 13:07:42.587563] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:45:23.262 [2024-07-22 13:07:42.587571] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:45:23.262 [2024-07-22 13:07:42.587576] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:45:23.262 [2024-07-22 13:07:42.587580] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x17e26c0) 00:45:23.262 [2024-07-22 13:07:42.587588] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:45:23.262 [2024-07-22 13:07:42.587609] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1818f60, cid 0, qid 0 00:45:23.262 [2024-07-22 13:07:42.587667] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:45:23.262 [2024-07-22 13:07:42.587674] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:45:23.262 [2024-07-22 13:07:42.587678] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:45:23.262 [2024-07-22 13:07:42.587683] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1818f60) on tqpair=0x17e26c0 00:45:23.262 [2024-07-22 13:07:42.587692] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:45:23.262 [2024-07-22 13:07:42.587696] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:45:23.262 [2024-07-22 13:07:42.587701] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x17e26c0) 00:45:23.262 [2024-07-22 13:07:42.587707] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:45:23.262 [2024-07-22 13:07:42.587714] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:45:23.262 [2024-07-22 13:07:42.587718] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:45:23.262 [2024-07-22 13:07:42.587722] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=1 on tqpair(0x17e26c0) 00:45:23.262 [2024-07-22 13:07:42.587728] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:45:23.262 [2024-07-22 13:07:42.587734] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:45:23.262 [2024-07-22 13:07:42.587738] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:45:23.262 [2024-07-22 13:07:42.587744] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x17e26c0) 00:45:23.262 [2024-07-22 13:07:42.587750] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:45:23.262 [2024-07-22 13:07:42.587757] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:45:23.262 [2024-07-22 13:07:42.587761] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:45:23.262 [2024-07-22 13:07:42.587765] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17e26c0) 00:45:23.262 [2024-07-22 13:07:42.587771] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:45:23.262 [2024-07-22 13:07:42.587776] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:45:23.262 [2024-07-22 13:07:42.587789] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:45:23.262 [2024-07-22 13:07:42.587801] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:45:23.262 [2024-07-22 13:07:42.587806] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:45:23.262 [2024-07-22 13:07:42.587810] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x17e26c0) 00:45:23.262 [2024-07-22 13:07:42.587817] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:23.262 [2024-07-22 13:07:42.587839] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1818f60, cid 0, qid 0 00:45:23.262 [2024-07-22 13:07:42.587846] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18190c0, cid 1, qid 0 00:45:23.262 [2024-07-22 13:07:42.587851] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1819220, cid 2, qid 0 00:45:23.262 [2024-07-22 13:07:42.587856] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1819380, cid 3, qid 0 00:45:23.262 [2024-07-22 13:07:42.587861] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18194e0, cid 4, qid 0 00:45:23.262 [2024-07-22 13:07:42.587960] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:45:23.262 [2024-07-22 13:07:42.587967] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:45:23.262 [2024-07-22 13:07:42.587971] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:45:23.262 [2024-07-22 13:07:42.587975] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18194e0) on tqpair=0x17e26c0 00:45:23.262 [2024-07-22 13:07:42.587982] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:45:23.262 [2024-07-22 13:07:42.587988] 
nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:45:23.262 [2024-07-22 13:07:42.587997] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:45:23.262 [2024-07-22 13:07:42.588008] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:45:23.262 [2024-07-22 13:07:42.588016] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:45:23.262 [2024-07-22 13:07:42.588020] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:45:23.262 [2024-07-22 13:07:42.588025] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x17e26c0) 00:45:23.262 [2024-07-22 13:07:42.588032] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:45:23.262 [2024-07-22 13:07:42.588052] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18194e0, cid 4, qid 0 00:45:23.262 [2024-07-22 13:07:42.588112] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:45:23.262 [2024-07-22 13:07:42.588119] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:45:23.262 [2024-07-22 13:07:42.588123] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:45:23.262 [2024-07-22 13:07:42.588128] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18194e0) on tqpair=0x17e26c0 00:45:23.262 [2024-07-22 13:07:42.588202] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:45:23.262 [2024-07-22 13:07:42.588231] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:45:23.262 [2024-07-22 13:07:42.588255] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:45:23.262 [2024-07-22 13:07:42.588260] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:45:23.262 [2024-07-22 13:07:42.588264] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x17e26c0) 00:45:23.262 [2024-07-22 13:07:42.588272] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:23.262 [2024-07-22 13:07:42.588294] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18194e0, cid 4, qid 0 00:45:23.262 [2024-07-22 13:07:42.588363] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:45:23.262 [2024-07-22 13:07:42.588371] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:45:23.262 [2024-07-22 13:07:42.588375] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:45:23.262 [2024-07-22 13:07:42.588379] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x17e26c0): datao=0, datal=4096, cccid=4 00:45:23.262 [2024-07-22 13:07:42.588384] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x18194e0) on tqpair(0x17e26c0): expected_datao=0, payload_size=4096 00:45:23.262 [2024-07-22 13:07:42.588392] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:45:23.262 [2024-07-22 13:07:42.588397] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: 
enter 00:45:23.262 [2024-07-22 13:07:42.588406] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:45:23.262 [2024-07-22 13:07:42.588412] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:45:23.262 [2024-07-22 13:07:42.588416] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:45:23.262 [2024-07-22 13:07:42.588420] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18194e0) on tqpair=0x17e26c0 00:45:23.262 [2024-07-22 13:07:42.588437] nvme_ctrlr.c:4556:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:45:23.262 [2024-07-22 13:07:42.588448] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:45:23.262 [2024-07-22 13:07:42.588459] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:45:23.262 [2024-07-22 13:07:42.588467] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:45:23.262 [2024-07-22 13:07:42.588471] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:45:23.262 [2024-07-22 13:07:42.588476] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x17e26c0) 00:45:23.262 [2024-07-22 13:07:42.588483] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:23.262 [2024-07-22 13:07:42.588504] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18194e0, cid 4, qid 0 00:45:23.262 [2024-07-22 13:07:42.588579] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:45:23.262 [2024-07-22 13:07:42.588586] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:45:23.262 [2024-07-22 13:07:42.588591] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:45:23.262 [2024-07-22 13:07:42.588595] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x17e26c0): datao=0, datal=4096, cccid=4 00:45:23.262 [2024-07-22 13:07:42.588600] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x18194e0) on tqpair(0x17e26c0): expected_datao=0, payload_size=4096 00:45:23.262 [2024-07-22 13:07:42.588608] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:45:23.262 [2024-07-22 13:07:42.588612] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:45:23.262 [2024-07-22 13:07:42.588620] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:45:23.262 [2024-07-22 13:07:42.588627] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:45:23.262 [2024-07-22 13:07:42.588631] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:45:23.262 [2024-07-22 13:07:42.588635] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18194e0) on tqpair=0x17e26c0 00:45:23.262 [2024-07-22 13:07:42.588651] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:45:23.262 [2024-07-22 13:07:42.588663] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:45:23.262 [2024-07-22 13:07:42.588672] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:45:23.262 [2024-07-22 13:07:42.588676] nvme_tcp.c: 
893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:45:23.262 [2024-07-22 13:07:42.588680] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x17e26c0) 00:45:23.262 [2024-07-22 13:07:42.588688] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:23.262 [2024-07-22 13:07:42.588709] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18194e0, cid 4, qid 0 00:45:23.262 [2024-07-22 13:07:42.588773] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:45:23.262 [2024-07-22 13:07:42.588779] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:45:23.262 [2024-07-22 13:07:42.588784] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:45:23.262 [2024-07-22 13:07:42.588788] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x17e26c0): datao=0, datal=4096, cccid=4 00:45:23.262 [2024-07-22 13:07:42.588793] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x18194e0) on tqpair(0x17e26c0): expected_datao=0, payload_size=4096 00:45:23.262 [2024-07-22 13:07:42.588801] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:45:23.262 [2024-07-22 13:07:42.588805] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:45:23.262 [2024-07-22 13:07:42.588814] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:45:23.262 [2024-07-22 13:07:42.588820] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:45:23.262 [2024-07-22 13:07:42.588824] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:45:23.262 [2024-07-22 13:07:42.588828] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18194e0) on tqpair=0x17e26c0 00:45:23.262 [2024-07-22 13:07:42.588838] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:45:23.262 [2024-07-22 13:07:42.588848] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:45:23.262 [2024-07-22 13:07:42.588859] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:45:23.262 [2024-07-22 13:07:42.588865] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:45:23.262 [2024-07-22 13:07:42.588871] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:45:23.262 [2024-07-22 13:07:42.588877] nvme_ctrlr.c:2978:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:45:23.262 [2024-07-22 13:07:42.588882] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:45:23.262 [2024-07-22 13:07:42.588888] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:45:23.262 [2024-07-22 13:07:42.588921] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:45:23.262 [2024-07-22 13:07:42.588932] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:45:23.262 [2024-07-22 13:07:42.588937] nvme_tcp.c: 
902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x17e26c0) 00:45:23.262 [2024-07-22 13:07:42.588945] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:23.262 [2024-07-22 13:07:42.588953] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:45:23.262 [2024-07-22 13:07:42.588957] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:45:23.262 [2024-07-22 13:07:42.588961] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x17e26c0) 00:45:23.262 [2024-07-22 13:07:42.588968] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:45:23.262 [2024-07-22 13:07:42.588999] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18194e0, cid 4, qid 0 00:45:23.262 [2024-07-22 13:07:42.589007] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1819640, cid 5, qid 0 00:45:23.262 [2024-07-22 13:07:42.589093] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:45:23.262 [2024-07-22 13:07:42.589100] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:45:23.262 [2024-07-22 13:07:42.589105] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:45:23.262 [2024-07-22 13:07:42.589109] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18194e0) on tqpair=0x17e26c0 00:45:23.262 [2024-07-22 13:07:42.589117] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:45:23.262 [2024-07-22 13:07:42.589124] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:45:23.262 [2024-07-22 13:07:42.589128] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:45:23.262 [2024-07-22 13:07:42.589132] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1819640) on tqpair=0x17e26c0 00:45:23.262 [2024-07-22 13:07:42.589159] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:45:23.262 [2024-07-22 13:07:42.589165] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:45:23.262 [2024-07-22 13:07:42.589170] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x17e26c0) 00:45:23.262 [2024-07-22 13:07:42.589177] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:23.262 [2024-07-22 13:07:42.589199] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1819640, cid 5, qid 0 00:45:23.262 [2024-07-22 13:07:42.589262] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:45:23.262 [2024-07-22 13:07:42.589270] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:45:23.262 [2024-07-22 13:07:42.589274] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:45:23.262 [2024-07-22 13:07:42.589278] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1819640) on tqpair=0x17e26c0 00:45:23.262 [2024-07-22 13:07:42.589290] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:45:23.262 [2024-07-22 13:07:42.589295] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:45:23.262 [2024-07-22 13:07:42.589299] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x17e26c0) 00:45:23.262 [2024-07-22 13:07:42.589307] nvme_qpair.c: 
213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:23.262 [2024-07-22 13:07:42.589326] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1819640, cid 5, qid 0 00:45:23.262 [2024-07-22 13:07:42.589399] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:45:23.262 [2024-07-22 13:07:42.589406] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:45:23.262 [2024-07-22 13:07:42.589411] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:45:23.262 [2024-07-22 13:07:42.589415] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1819640) on tqpair=0x17e26c0 00:45:23.262 [2024-07-22 13:07:42.589426] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:45:23.262 [2024-07-22 13:07:42.589432] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:45:23.262 [2024-07-22 13:07:42.589436] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x17e26c0) 00:45:23.262 [2024-07-22 13:07:42.589443] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:23.262 [2024-07-22 13:07:42.589461] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1819640, cid 5, qid 0 00:45:23.262 [2024-07-22 13:07:42.589535] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:45:23.262 [2024-07-22 13:07:42.589542] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:45:23.262 [2024-07-22 13:07:42.589546] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:45:23.262 [2024-07-22 13:07:42.589551] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1819640) on tqpair=0x17e26c0 00:45:23.263 [2024-07-22 13:07:42.589566] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:45:23.263 [2024-07-22 13:07:42.589572] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:45:23.263 [2024-07-22 13:07:42.589576] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x17e26c0) 00:45:23.263 [2024-07-22 13:07:42.589584] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:23.263 [2024-07-22 13:07:42.589591] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:45:23.263 [2024-07-22 13:07:42.589595] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:45:23.263 [2024-07-22 13:07:42.589599] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x17e26c0) 00:45:23.263 [2024-07-22 13:07:42.589606] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:23.263 [2024-07-22 13:07:42.589613] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:45:23.263 [2024-07-22 13:07:42.589617] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:45:23.263 [2024-07-22 13:07:42.589622] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x17e26c0) 00:45:23.263 [2024-07-22 13:07:42.589628] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:45:23.263 [2024-07-22 13:07:42.589636] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:45:23.263 [2024-07-22 13:07:42.589640] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:45:23.263 [2024-07-22 13:07:42.589644] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x17e26c0) 00:45:23.263 [2024-07-22 13:07:42.589651] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:23.263 [2024-07-22 13:07:42.589672] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1819640, cid 5, qid 0 00:45:23.263 [2024-07-22 13:07:42.589679] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18194e0, cid 4, qid 0 00:45:23.263 [2024-07-22 13:07:42.589684] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18197a0, cid 6, qid 0 00:45:23.263 [2024-07-22 13:07:42.589689] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1819900, cid 7, qid 0 00:45:23.263 [2024-07-22 13:07:42.589830] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:45:23.263 [2024-07-22 13:07:42.589837] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:45:23.263 [2024-07-22 13:07:42.589841] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:45:23.263 [2024-07-22 13:07:42.589846] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x17e26c0): datao=0, datal=8192, cccid=5 00:45:23.263 [2024-07-22 13:07:42.589851] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1819640) on tqpair(0x17e26c0): expected_datao=0, payload_size=8192 00:45:23.263 [2024-07-22 13:07:42.589868] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:45:23.263 [2024-07-22 13:07:42.589873] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:45:23.263 [2024-07-22 13:07:42.589879] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:45:23.263 [2024-07-22 13:07:42.589885] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:45:23.263 [2024-07-22 13:07:42.589889] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:45:23.263 [2024-07-22 13:07:42.589894] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x17e26c0): datao=0, datal=512, cccid=4 00:45:23.263 [2024-07-22 13:07:42.589898] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x18194e0) on tqpair(0x17e26c0): expected_datao=0, payload_size=512 00:45:23.263 [2024-07-22 13:07:42.589906] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:45:23.263 [2024-07-22 13:07:42.589910] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:45:23.263 [2024-07-22 13:07:42.589915] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:45:23.263 [2024-07-22 13:07:42.589921] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:45:23.263 [2024-07-22 13:07:42.589925] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:45:23.263 [2024-07-22 13:07:42.589929] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x17e26c0): datao=0, datal=512, cccid=6 00:45:23.263 [2024-07-22 13:07:42.589934] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x18197a0) on tqpair(0x17e26c0): expected_datao=0, payload_size=512 00:45:23.263 [2024-07-22 13:07:42.589941] 
nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:45:23.263 [2024-07-22 13:07:42.589945] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:45:23.263 [2024-07-22 13:07:42.589950] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:45:23.263 [2024-07-22 13:07:42.589956] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:45:23.263 [2024-07-22 13:07:42.589960] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:45:23.263 [2024-07-22 13:07:42.589964] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x17e26c0): datao=0, datal=4096, cccid=7 00:45:23.263 [2024-07-22 13:07:42.589969] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1819900) on tqpair(0x17e26c0): expected_datao=0, payload_size=4096 00:45:23.263 [2024-07-22 13:07:42.589976] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:45:23.263 [2024-07-22 13:07:42.589980] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:45:23.263 [2024-07-22 13:07:42.589989] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:45:23.263 [2024-07-22 13:07:42.589995] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:45:23.263 [2024-07-22 13:07:42.589998] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:45:23.263 [2024-07-22 13:07:42.590003] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1819640) on tqpair=0x17e26c0 00:45:23.263 [2024-07-22 13:07:42.590020] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:45:23.263 [2024-07-22 13:07:42.590028] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:45:23.263 [2024-07-22 13:07:42.590032] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:45:23.263 [2024-07-22 13:07:42.590036] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18194e0) on tqpair=0x17e26c0 00:45:23.263 [2024-07-22 13:07:42.590047] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:45:23.263 [2024-07-22 13:07:42.590054] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:45:23.263 [2024-07-22 13:07:42.590059] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:45:23.263 [2024-07-22 13:07:42.590064] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18197a0) on tqpair=0x17e26c0 00:45:23.263 ===================================================== 00:45:23.263 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:45:23.263 ===================================================== 00:45:23.263 Controller Capabilities/Features 00:45:23.263 ================================ 00:45:23.263 Vendor ID: 8086 00:45:23.263 Subsystem Vendor ID: 8086 00:45:23.263 Serial Number: SPDK00000000000001 00:45:23.263 Model Number: SPDK bdev Controller 00:45:23.263 Firmware Version: 24.01.1 00:45:23.263 Recommended Arb Burst: 6 00:45:23.263 IEEE OUI Identifier: e4 d2 5c 00:45:23.263 Multi-path I/O 00:45:23.263 May have multiple subsystem ports: Yes 00:45:23.263 May have multiple controllers: Yes 00:45:23.263 Associated with SR-IOV VF: No 00:45:23.263 Max Data Transfer Size: 131072 00:45:23.263 Max Number of Namespaces: 32 00:45:23.263 Max Number of I/O Queues: 127 00:45:23.263 NVMe Specification Version (VS): 1.3 00:45:23.263 NVMe Specification Version (Identify): 1.3 00:45:23.263 Maximum Queue Entries: 128 00:45:23.263 Contiguous Queues Required: Yes 00:45:23.263 Arbitration Mechanisms 
Supported 00:45:23.263 Weighted Round Robin: Not Supported 00:45:23.263 Vendor Specific: Not Supported 00:45:23.263 Reset Timeout: 15000 ms 00:45:23.263 Doorbell Stride: 4 bytes 00:45:23.263 NVM Subsystem Reset: Not Supported 00:45:23.263 Command Sets Supported 00:45:23.263 NVM Command Set: Supported 00:45:23.263 Boot Partition: Not Supported 00:45:23.263 Memory Page Size Minimum: 4096 bytes 00:45:23.263 Memory Page Size Maximum: 4096 bytes 00:45:23.263 Persistent Memory Region: Not Supported 00:45:23.263 Optional Asynchronous Events Supported 00:45:23.263 Namespace Attribute Notices: Supported 00:45:23.263 Firmware Activation Notices: Not Supported 00:45:23.263 ANA Change Notices: Not Supported 00:45:23.263 PLE Aggregate Log Change Notices: Not Supported 00:45:23.263 LBA Status Info Alert Notices: Not Supported 00:45:23.263 EGE Aggregate Log Change Notices: Not Supported 00:45:23.263 Normal NVM Subsystem Shutdown event: Not Supported 00:45:23.263 Zone Descriptor Change Notices: Not Supported 00:45:23.263 Discovery Log Change Notices: Not Supported 00:45:23.263 Controller Attributes 00:45:23.263 128-bit Host Identifier: Supported 00:45:23.263 Non-Operational Permissive Mode: Not Supported 00:45:23.263 NVM Sets: Not Supported 00:45:23.263 Read Recovery Levels: Not Supported 00:45:23.263 Endurance Groups: Not Supported 00:45:23.263 Predictable Latency Mode: Not Supported 00:45:23.263 Traffic Based Keep ALive: Not Supported 00:45:23.263 Namespace Granularity: Not Supported 00:45:23.263 SQ Associations: Not Supported 00:45:23.263 UUID List: Not Supported 00:45:23.263 Multi-Domain Subsystem: Not Supported 00:45:23.263 Fixed Capacity Management: Not Supported 00:45:23.263 Variable Capacity Management: Not Supported 00:45:23.263 Delete Endurance Group: Not Supported 00:45:23.263 Delete NVM Set: Not Supported 00:45:23.263 Extended LBA Formats Supported: Not Supported 00:45:23.263 Flexible Data Placement Supported: Not Supported 00:45:23.263 00:45:23.263 Controller Memory Buffer Support 00:45:23.263 ================================ 00:45:23.263 Supported: No 00:45:23.263 00:45:23.263 Persistent Memory Region Support 00:45:23.263 ================================ 00:45:23.263 Supported: No 00:45:23.263 00:45:23.263 Admin Command Set Attributes 00:45:23.263 ============================ 00:45:23.263 Security Send/Receive: Not Supported 00:45:23.263 Format NVM: Not Supported 00:45:23.263 Firmware Activate/Download: Not Supported 00:45:23.263 Namespace Management: Not Supported 00:45:23.263 Device Self-Test: Not Supported 00:45:23.263 Directives: Not Supported 00:45:23.263 NVMe-MI: Not Supported 00:45:23.263 Virtualization Management: Not Supported 00:45:23.263 Doorbell Buffer Config: Not Supported 00:45:23.263 Get LBA Status Capability: Not Supported 00:45:23.263 Command & Feature Lockdown Capability: Not Supported 00:45:23.263 Abort Command Limit: 4 00:45:23.263 Async Event Request Limit: 4 00:45:23.263 Number of Firmware Slots: N/A 00:45:23.263 Firmware Slot 1 Read-Only: N/A 00:45:23.263 Firmware Activation Without Reset: [2024-07-22 13:07:42.590073] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:45:23.263 [2024-07-22 13:07:42.590080] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:45:23.263 [2024-07-22 13:07:42.590084] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:45:23.263 [2024-07-22 13:07:42.590088] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1819900) on tqpair=0x17e26c0 00:45:23.263 N/A 00:45:23.263 
Multiple Update Detection Support: N/A 00:45:23.263 Firmware Update Granularity: No Information Provided 00:45:23.263 Per-Namespace SMART Log: No 00:45:23.263 Asymmetric Namespace Access Log Page: Not Supported 00:45:23.263 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:45:23.263 Command Effects Log Page: Supported 00:45:23.263 Get Log Page Extended Data: Supported 00:45:23.263 Telemetry Log Pages: Not Supported 00:45:23.263 Persistent Event Log Pages: Not Supported 00:45:23.263 Supported Log Pages Log Page: May Support 00:45:23.263 Commands Supported & Effects Log Page: Not Supported 00:45:23.263 Feature Identifiers & Effects Log Page:May Support 00:45:23.263 NVMe-MI Commands & Effects Log Page: May Support 00:45:23.263 Data Area 4 for Telemetry Log: Not Supported 00:45:23.263 Error Log Page Entries Supported: 128 00:45:23.263 Keep Alive: Supported 00:45:23.263 Keep Alive Granularity: 10000 ms 00:45:23.263 00:45:23.263 NVM Command Set Attributes 00:45:23.263 ========================== 00:45:23.263 Submission Queue Entry Size 00:45:23.263 Max: 64 00:45:23.263 Min: 64 00:45:23.263 Completion Queue Entry Size 00:45:23.263 Max: 16 00:45:23.263 Min: 16 00:45:23.263 Number of Namespaces: 32 00:45:23.263 Compare Command: Supported 00:45:23.263 Write Uncorrectable Command: Not Supported 00:45:23.263 Dataset Management Command: Supported 00:45:23.263 Write Zeroes Command: Supported 00:45:23.263 Set Features Save Field: Not Supported 00:45:23.263 Reservations: Supported 00:45:23.263 Timestamp: Not Supported 00:45:23.263 Copy: Supported 00:45:23.263 Volatile Write Cache: Present 00:45:23.263 Atomic Write Unit (Normal): 1 00:45:23.263 Atomic Write Unit (PFail): 1 00:45:23.263 Atomic Compare & Write Unit: 1 00:45:23.263 Fused Compare & Write: Supported 00:45:23.263 Scatter-Gather List 00:45:23.263 SGL Command Set: Supported 00:45:23.263 SGL Keyed: Supported 00:45:23.263 SGL Bit Bucket Descriptor: Not Supported 00:45:23.263 SGL Metadata Pointer: Not Supported 00:45:23.263 Oversized SGL: Not Supported 00:45:23.263 SGL Metadata Address: Not Supported 00:45:23.263 SGL Offset: Supported 00:45:23.263 Transport SGL Data Block: Not Supported 00:45:23.263 Replay Protected Memory Block: Not Supported 00:45:23.263 00:45:23.263 Firmware Slot Information 00:45:23.263 ========================= 00:45:23.263 Active slot: 1 00:45:23.263 Slot 1 Firmware Revision: 24.01.1 00:45:23.263 00:45:23.263 00:45:23.263 Commands Supported and Effects 00:45:23.263 ============================== 00:45:23.263 Admin Commands 00:45:23.263 -------------- 00:45:23.263 Get Log Page (02h): Supported 00:45:23.263 Identify (06h): Supported 00:45:23.263 Abort (08h): Supported 00:45:23.263 Set Features (09h): Supported 00:45:23.263 Get Features (0Ah): Supported 00:45:23.263 Asynchronous Event Request (0Ch): Supported 00:45:23.263 Keep Alive (18h): Supported 00:45:23.263 I/O Commands 00:45:23.263 ------------ 00:45:23.263 Flush (00h): Supported LBA-Change 00:45:23.263 Write (01h): Supported LBA-Change 00:45:23.263 Read (02h): Supported 00:45:23.263 Compare (05h): Supported 00:45:23.263 Write Zeroes (08h): Supported LBA-Change 00:45:23.263 Dataset Management (09h): Supported LBA-Change 00:45:23.263 Copy (19h): Supported LBA-Change 00:45:23.263 Unknown (79h): Supported LBA-Change 00:45:23.263 Unknown (7Ah): Supported 00:45:23.263 00:45:23.263 Error Log 00:45:23.263 ========= 00:45:23.263 00:45:23.263 Arbitration 00:45:23.263 =========== 00:45:23.263 Arbitration Burst: 1 00:45:23.263 00:45:23.263 Power Management 00:45:23.263 ================ 
00:45:23.263 Number of Power States: 1 00:45:23.263 Current Power State: Power State #0 00:45:23.263 Power State #0: 00:45:23.263 Max Power: 0.00 W 00:45:23.263 Non-Operational State: Operational 00:45:23.263 Entry Latency: Not Reported 00:45:23.263 Exit Latency: Not Reported 00:45:23.263 Relative Read Throughput: 0 00:45:23.263 Relative Read Latency: 0 00:45:23.263 Relative Write Throughput: 0 00:45:23.263 Relative Write Latency: 0 00:45:23.263 Idle Power: Not Reported 00:45:23.263 Active Power: Not Reported 00:45:23.263 Non-Operational Permissive Mode: Not Supported 00:45:23.263 00:45:23.263 Health Information 00:45:23.263 ================== 00:45:23.263 Critical Warnings: 00:45:23.263 Available Spare Space: OK 00:45:23.263 Temperature: OK 00:45:23.263 Device Reliability: OK 00:45:23.263 Read Only: No 00:45:23.263 Volatile Memory Backup: OK 00:45:23.263 Current Temperature: 0 Kelvin (-273 Celsius) 00:45:23.263 Temperature Threshold: [2024-07-22 13:07:42.594201] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:45:23.263 [2024-07-22 13:07:42.594212] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:45:23.263 [2024-07-22 13:07:42.594216] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x17e26c0) 00:45:23.263 [2024-07-22 13:07:42.594225] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:23.263 [2024-07-22 13:07:42.594253] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1819900, cid 7, qid 0 00:45:23.263 [2024-07-22 13:07:42.594332] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:45:23.263 [2024-07-22 13:07:42.594339] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:45:23.263 [2024-07-22 13:07:42.594344] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:45:23.264 [2024-07-22 13:07:42.594348] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1819900) on tqpair=0x17e26c0 00:45:23.264 [2024-07-22 13:07:42.594410] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:45:23.264 [2024-07-22 13:07:42.594430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:23.264 [2024-07-22 13:07:42.594438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:23.264 [2024-07-22 13:07:42.594444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:23.264 [2024-07-22 13:07:42.594450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:23.264 [2024-07-22 13:07:42.594459] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:45:23.264 [2024-07-22 13:07:42.594480] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:45:23.264 [2024-07-22 13:07:42.594484] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17e26c0) 00:45:23.264 [2024-07-22 13:07:42.594492] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:23.264 [2024-07-22 13:07:42.594531] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1819380, cid 3, 
qid 0 00:45:23.264 [2024-07-22 13:07:42.594608] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:45:23.264 [2024-07-22 13:07:42.594617] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:45:23.264 [2024-07-22 13:07:42.594622] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:45:23.264 [2024-07-22 13:07:42.594626] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1819380) on tqpair=0x17e26c0 00:45:23.264 [2024-07-22 13:07:42.594636] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:45:23.264 [2024-07-22 13:07:42.594640] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:45:23.264 [2024-07-22 13:07:42.594645] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17e26c0) 00:45:23.264 [2024-07-22 13:07:42.594652] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:23.264 [2024-07-22 13:07:42.594677] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1819380, cid 3, qid 0 00:45:23.264 [2024-07-22 13:07:42.594746] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:45:23.264 [2024-07-22 13:07:42.594752] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:45:23.264 [2024-07-22 13:07:42.594757] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:45:23.264 [2024-07-22 13:07:42.594761] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1819380) on tqpair=0x17e26c0 00:45:23.264 [2024-07-22 13:07:42.594767] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:45:23.264 [2024-07-22 13:07:42.594773] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:45:23.264 [2024-07-22 13:07:42.594783] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:45:23.264 [2024-07-22 13:07:42.594788] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:45:23.264 [2024-07-22 13:07:42.594792] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17e26c0) 00:45:23.264 [2024-07-22 13:07:42.594800] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:23.264 [2024-07-22 13:07:42.594818] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1819380, cid 3, qid 0 00:45:23.264 [2024-07-22 13:07:42.594874] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:45:23.264 [2024-07-22 13:07:42.594881] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:45:23.264 [2024-07-22 13:07:42.594885] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:45:23.264 [2024-07-22 13:07:42.594889] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1819380) on tqpair=0x17e26c0 00:45:23.264 [2024-07-22 13:07:42.594901] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:45:23.264 [2024-07-22 13:07:42.594907] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:45:23.264 [2024-07-22 13:07:42.594911] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17e26c0) 00:45:23.264 [2024-07-22 13:07:42.594918] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:23.264 
[2024-07-22 13:07:42.594937] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1819380, cid 3, qid 0 00:45:23.264 [2024-07-22 13:07:42.594990] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:45:23.264 [2024-07-22 13:07:42.594996] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:45:23.264 [2024-07-22 13:07:42.595001] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:45:23.264 [2024-07-22 13:07:42.595005] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1819380) on tqpair=0x17e26c0 00:45:23.264 [2024-07-22 13:07:42.595017] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:45:23.264 [2024-07-22 13:07:42.595022] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:45:23.264 [2024-07-22 13:07:42.595026] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17e26c0) 00:45:23.264 [2024-07-22 13:07:42.595033] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:23.264 [2024-07-22 13:07:42.595052] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1819380, cid 3, qid 0 00:45:23.264 [2024-07-22 13:07:42.595105] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:45:23.264 [2024-07-22 13:07:42.595112] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:45:23.264 [2024-07-22 13:07:42.595116] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:45:23.264 [2024-07-22 13:07:42.595121] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1819380) on tqpair=0x17e26c0 00:45:23.264 [2024-07-22 13:07:42.595132] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:45:23.264 [2024-07-22 13:07:42.595151] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:45:23.264 [2024-07-22 13:07:42.595156] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17e26c0) 00:45:23.264 [2024-07-22 13:07:42.595164] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:23.264 [2024-07-22 13:07:42.595186] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1819380, cid 3, qid 0 00:45:23.264 [2024-07-22 13:07:42.595243] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:45:23.264 [2024-07-22 13:07:42.595250] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:45:23.264 [2024-07-22 13:07:42.595254] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:45:23.264 [2024-07-22 13:07:42.595259] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1819380) on tqpair=0x17e26c0 00:45:23.264 [2024-07-22 13:07:42.595271] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:45:23.264 [2024-07-22 13:07:42.595276] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:45:23.265 [2024-07-22 13:07:42.595280] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17e26c0) 00:45:23.265 [2024-07-22 13:07:42.595288] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:23.265 [2024-07-22 13:07:42.595307] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1819380, cid 3, qid 0 00:45:23.265 [2024-07-22 13:07:42.595363] 
nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:45:23.265 [2024-07-22 13:07:42.595370] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:45:23.265 [2024-07-22 13:07:42.595374] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:45:23.265 [2024-07-22 13:07:42.595378] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1819380) on tqpair=0x17e26c0 00:45:23.265 [2024-07-22 13:07:42.595390] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:45:23.265 [2024-07-22 13:07:42.595395] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:45:23.265 [2024-07-22 13:07:42.595400] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17e26c0) 00:45:23.265 [2024-07-22 13:07:42.595407] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:23.265 [2024-07-22 13:07:42.595426] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1819380, cid 3, qid 0 00:45:23.265 [2024-07-22 13:07:42.595476] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:45:23.265 [2024-07-22 13:07:42.595483] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:45:23.265 [2024-07-22 13:07:42.595487] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:45:23.265 [2024-07-22 13:07:42.595491] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1819380) on tqpair=0x17e26c0 00:45:23.265 [2024-07-22 13:07:42.595503] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:45:23.265 [2024-07-22 13:07:42.595508] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:45:23.265 [2024-07-22 13:07:42.595512] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17e26c0) 00:45:23.265 [2024-07-22 13:07:42.595520] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:23.265 [2024-07-22 13:07:42.595538] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1819380, cid 3, qid 0 00:45:23.265 [2024-07-22 13:07:42.595598] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:45:23.265 [2024-07-22 13:07:42.595605] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:45:23.265 [2024-07-22 13:07:42.595609] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:45:23.265 [2024-07-22 13:07:42.595613] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1819380) on tqpair=0x17e26c0 00:45:23.265 [2024-07-22 13:07:42.595625] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:45:23.265 [2024-07-22 13:07:42.595630] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:45:23.265 [2024-07-22 13:07:42.595634] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17e26c0) 00:45:23.265 [2024-07-22 13:07:42.595642] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:23.265 [2024-07-22 13:07:42.595660] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1819380, cid 3, qid 0 00:45:23.265 [2024-07-22 13:07:42.595715] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:45:23.265 [2024-07-22 13:07:42.595722] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:45:23.265 
[2024-07-22 13:07:42.595727] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:45:23.265 [2024-07-22 13:07:42.595731] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1819380) on tqpair=0x17e26c0 00:45:23.265 [2024-07-22 13:07:42.595743] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:45:23.265 [2024-07-22 13:07:42.595748] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:45:23.265 [2024-07-22 13:07:42.595752] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17e26c0) 00:45:23.265 [2024-07-22 13:07:42.595759] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:23.265 [2024-07-22 13:07:42.595778] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1819380, cid 3, qid 0 00:45:23.265 [2024-07-22 13:07:42.595833] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:45:23.265 [2024-07-22 13:07:42.595840] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:45:23.265 [2024-07-22 13:07:42.595844] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:45:23.265 [2024-07-22 13:07:42.595849] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1819380) on tqpair=0x17e26c0 00:45:23.265 [2024-07-22 13:07:42.595860] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:45:23.265 [2024-07-22 13:07:42.595865] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:45:23.265 [2024-07-22 13:07:42.595869] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17e26c0) 00:45:23.265 [2024-07-22 13:07:42.595877] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:23.265 [2024-07-22 13:07:42.595895] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1819380, cid 3, qid 0 00:45:23.265 [2024-07-22 13:07:42.595949] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:45:23.265 [2024-07-22 13:07:42.595956] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:45:23.265 [2024-07-22 13:07:42.595960] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:45:23.265 [2024-07-22 13:07:42.595965] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1819380) on tqpair=0x17e26c0 00:45:23.265 [2024-07-22 13:07:42.595976] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:45:23.265 [2024-07-22 13:07:42.595981] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:45:23.265 [2024-07-22 13:07:42.595985] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17e26c0) 00:45:23.265 [2024-07-22 13:07:42.595993] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:23.265 [2024-07-22 13:07:42.596011] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1819380, cid 3, qid 0 00:45:23.265 [2024-07-22 13:07:42.596062] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:45:23.265 [2024-07-22 13:07:42.596069] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:45:23.265 [2024-07-22 13:07:42.596073] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:45:23.265 [2024-07-22 13:07:42.596077] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: 
complete tcp_req(0x1819380) on tqpair=0x17e26c0 00:45:23.265 [2024-07-22 13:07:42.596089] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:45:23.265 [2024-07-22 13:07:42.596094] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:45:23.265 [2024-07-22 13:07:42.596098] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17e26c0) 00:45:23.265 [2024-07-22 13:07:42.596105] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:23.265 [2024-07-22 13:07:42.596124] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1819380, cid 3, qid 0 00:45:23.265 [2024-07-22 13:07:42.596189] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:45:23.265 [2024-07-22 13:07:42.596198] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:45:23.265 [2024-07-22 13:07:42.596202] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:45:23.265 [2024-07-22 13:07:42.596207] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1819380) on tqpair=0x17e26c0 00:45:23.265 [2024-07-22 13:07:42.596219] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:45:23.265 [2024-07-22 13:07:42.596224] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:45:23.265 [2024-07-22 13:07:42.596228] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17e26c0) 00:45:23.265 [2024-07-22 13:07:42.596236] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:23.265 [2024-07-22 13:07:42.596257] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1819380, cid 3, qid 0 00:45:23.265 [2024-07-22 13:07:42.596313] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:45:23.265 [2024-07-22 13:07:42.596320] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:45:23.265 [2024-07-22 13:07:42.596325] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:45:23.265 [2024-07-22 13:07:42.596329] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1819380) on tqpair=0x17e26c0 00:45:23.265 [2024-07-22 13:07:42.596341] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:45:23.265 [2024-07-22 13:07:42.596346] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:45:23.265 [2024-07-22 13:07:42.596350] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17e26c0) 00:45:23.265 [2024-07-22 13:07:42.596357] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:23.265 [2024-07-22 13:07:42.596376] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1819380, cid 3, qid 0 00:45:23.265 [2024-07-22 13:07:42.596427] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:45:23.265 [2024-07-22 13:07:42.596434] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:45:23.265 [2024-07-22 13:07:42.596438] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:45:23.265 [2024-07-22 13:07:42.596442] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1819380) on tqpair=0x17e26c0 00:45:23.265 [2024-07-22 13:07:42.596454] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:45:23.265 [2024-07-22 13:07:42.596459] 
nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:45:23.265 [2024-07-22 13:07:42.596463] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17e26c0) 00:45:23.265 [2024-07-22 13:07:42.596470] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:23.265 [2024-07-22 13:07:42.596489] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1819380, cid 3, qid 0 00:45:23.265 [2024-07-22 13:07:42.596543] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:45:23.265 [2024-07-22 13:07:42.596549] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:45:23.265 [2024-07-22 13:07:42.596553] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:45:23.265 [2024-07-22 13:07:42.596558] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1819380) on tqpair=0x17e26c0 00:45:23.265 [2024-07-22 13:07:42.596569] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:45:23.265 [2024-07-22 13:07:42.596574] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:45:23.265 [2024-07-22 13:07:42.596579] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17e26c0) 00:45:23.265 [2024-07-22 13:07:42.596586] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:23.265 [2024-07-22 13:07:42.596604] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1819380, cid 3, qid 0 00:45:23.265 [2024-07-22 13:07:42.596660] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:45:23.265 [2024-07-22 13:07:42.596667] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:45:23.265 [2024-07-22 13:07:42.596671] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:45:23.265 [2024-07-22 13:07:42.596676] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1819380) on tqpair=0x17e26c0 00:45:23.265 [2024-07-22 13:07:42.596687] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:45:23.265 [2024-07-22 13:07:42.596692] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:45:23.265 [2024-07-22 13:07:42.596696] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17e26c0) 00:45:23.265 [2024-07-22 13:07:42.596704] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:23.265 [2024-07-22 13:07:42.596722] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1819380, cid 3, qid 0 00:45:23.265 [2024-07-22 13:07:42.596778] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:45:23.265 [2024-07-22 13:07:42.596785] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:45:23.265 [2024-07-22 13:07:42.596789] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:45:23.265 [2024-07-22 13:07:42.596794] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1819380) on tqpair=0x17e26c0 00:45:23.265 [2024-07-22 13:07:42.596806] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:45:23.265 [2024-07-22 13:07:42.596810] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:45:23.265 [2024-07-22 13:07:42.596815] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on 
tqpair(0x17e26c0) 00:45:23.265 [2024-07-22 13:07:42.596822] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:23.265 [2024-07-22 13:07:42.596840] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1819380, cid 3, qid 0 00:45:23.265 [2024-07-22 13:07:42.596895] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:45:23.265 [2024-07-22 13:07:42.596902] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:45:23.265 [2024-07-22 13:07:42.596906] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:45:23.265 [2024-07-22 13:07:42.596910] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1819380) on tqpair=0x17e26c0 00:45:23.265 [2024-07-22 13:07:42.596922] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:45:23.265 [2024-07-22 13:07:42.596927] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:45:23.265 [2024-07-22 13:07:42.596931] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17e26c0) 00:45:23.265 [2024-07-22 13:07:42.596939] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:23.265 [2024-07-22 13:07:42.596957] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1819380, cid 3, qid 0 00:45:23.265 [2024-07-22 13:07:42.597009] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:45:23.265 [2024-07-22 13:07:42.597016] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:45:23.265 [2024-07-22 13:07:42.597020] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:45:23.265 [2024-07-22 13:07:42.597025] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1819380) on tqpair=0x17e26c0 00:45:23.265 [2024-07-22 13:07:42.597036] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:45:23.265 [2024-07-22 13:07:42.597041] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:45:23.265 [2024-07-22 13:07:42.597045] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17e26c0) 00:45:23.265 [2024-07-22 13:07:42.597053] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:23.265 [2024-07-22 13:07:42.597071] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1819380, cid 3, qid 0 00:45:23.265 [2024-07-22 13:07:42.597133] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:45:23.265 [2024-07-22 13:07:42.597175] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:45:23.265 [2024-07-22 13:07:42.597181] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:45:23.265 [2024-07-22 13:07:42.597185] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1819380) on tqpair=0x17e26c0 00:45:23.265 [2024-07-22 13:07:42.597199] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:45:23.265 [2024-07-22 13:07:42.597204] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:45:23.265 [2024-07-22 13:07:42.597209] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17e26c0) 00:45:23.265 [2024-07-22 13:07:42.597217] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:45:23.265 [2024-07-22 13:07:42.597249] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1819380, cid 3, qid 0 00:45:23.265 [2024-07-22 13:07:42.597308] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:45:23.265 [2024-07-22 13:07:42.597315] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:45:23.265 [2024-07-22 13:07:42.597319] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:45:23.265 [2024-07-22 13:07:42.597324] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1819380) on tqpair=0x17e26c0 00:45:23.265 [2024-07-22 13:07:42.597336] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:45:23.265 [2024-07-22 13:07:42.597341] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:45:23.265 [2024-07-22 13:07:42.597345] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17e26c0) 00:45:23.265 [2024-07-22 13:07:42.597353] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:23.265 [2024-07-22 13:07:42.597373] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1819380, cid 3, qid 0 00:45:23.265 [2024-07-22 13:07:42.597430] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:45:23.265 [2024-07-22 13:07:42.597437] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:45:23.265 [2024-07-22 13:07:42.597442] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:45:23.265 [2024-07-22 13:07:42.597446] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1819380) on tqpair=0x17e26c0 00:45:23.265 [2024-07-22 13:07:42.597458] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:45:23.265 [2024-07-22 13:07:42.597464] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:45:23.265 [2024-07-22 13:07:42.597468] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17e26c0) 00:45:23.265 [2024-07-22 13:07:42.597476] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:23.265 [2024-07-22 13:07:42.597494] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1819380, cid 3, qid 0 00:45:23.265 [2024-07-22 13:07:42.597547] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:45:23.265 [2024-07-22 13:07:42.597554] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:45:23.265 [2024-07-22 13:07:42.597558] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:45:23.265 [2024-07-22 13:07:42.597563] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1819380) on tqpair=0x17e26c0 00:45:23.265 [2024-07-22 13:07:42.597589] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:45:23.265 [2024-07-22 13:07:42.597594] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:45:23.265 [2024-07-22 13:07:42.597599] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17e26c0) 00:45:23.265 [2024-07-22 13:07:42.597606] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:23.265 [2024-07-22 13:07:42.597624] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1819380, cid 3, qid 0 00:45:23.265 [2024-07-22 13:07:42.597674] 
nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:45:23.265 [2024-07-22 13:07:42.597681] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:45:23.265 [2024-07-22 13:07:42.597685] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:45:23.265 [2024-07-22 13:07:42.597690] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1819380) on tqpair=0x17e26c0 00:45:23.265 [2024-07-22 13:07:42.597701] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:45:23.265 [2024-07-22 13:07:42.597706] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:45:23.265 [2024-07-22 13:07:42.597710] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17e26c0) 00:45:23.265 [2024-07-22 13:07:42.597718] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:23.265 [2024-07-22 13:07:42.597736] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1819380, cid 3, qid 0 00:45:23.265 [2024-07-22 13:07:42.597789] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:45:23.265 [2024-07-22 13:07:42.597796] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:45:23.265 [2024-07-22 13:07:42.597800] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:45:23.265 [2024-07-22 13:07:42.597805] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1819380) on tqpair=0x17e26c0 00:45:23.266 [2024-07-22 13:07:42.597816] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:45:23.266 [2024-07-22 13:07:42.597821] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:45:23.266 [2024-07-22 13:07:42.597825] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17e26c0) 00:45:23.266 [2024-07-22 13:07:42.597833] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:23.266 [2024-07-22 13:07:42.597851] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1819380, cid 3, qid 0 00:45:23.266 [2024-07-22 13:07:42.597902] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:45:23.266 [2024-07-22 13:07:42.597909] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:45:23.266 [2024-07-22 13:07:42.597913] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:45:23.266 [2024-07-22 13:07:42.597918] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1819380) on tqpair=0x17e26c0 00:45:23.266 [2024-07-22 13:07:42.597929] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:45:23.266 [2024-07-22 13:07:42.597934] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:45:23.266 [2024-07-22 13:07:42.597938] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17e26c0) 00:45:23.266 [2024-07-22 13:07:42.597945] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:23.266 [2024-07-22 13:07:42.597964] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1819380, cid 3, qid 0 00:45:23.266 [2024-07-22 13:07:42.598021] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:45:23.266 [2024-07-22 13:07:42.598028] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:45:23.266 
[2024-07-22 13:07:42.598032] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:45:23.266 [2024-07-22 13:07:42.598036] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1819380) on tqpair=0x17e26c0 00:45:23.266 [2024-07-22 13:07:42.598048] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:45:23.266 [2024-07-22 13:07:42.598053] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:45:23.266 [2024-07-22 13:07:42.598057] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17e26c0) 00:45:23.266 [2024-07-22 13:07:42.598064] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:23.266 [2024-07-22 13:07:42.598082] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1819380, cid 3, qid 0 00:45:23.266 [2024-07-22 13:07:42.598132] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:45:23.266 [2024-07-22 13:07:42.598139] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:45:23.266 [2024-07-22 13:07:42.598143] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:45:23.266 [2024-07-22 13:07:42.598148] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1819380) on tqpair=0x17e26c0 00:45:23.266 [2024-07-22 13:07:42.598159] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:45:23.266 [2024-07-22 13:07:42.598164] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:45:23.266 [2024-07-22 13:07:42.602199] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17e26c0) 00:45:23.266 [2024-07-22 13:07:42.602213] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:23.266 [2024-07-22 13:07:42.602245] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1819380, cid 3, qid 0 00:45:23.266 [2024-07-22 13:07:42.602309] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:45:23.266 [2024-07-22 13:07:42.602317] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:45:23.266 [2024-07-22 13:07:42.602321] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:45:23.266 [2024-07-22 13:07:42.602326] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1819380) on tqpair=0x17e26c0 00:45:23.266 [2024-07-22 13:07:42.602337] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:45:23.266 0 Kelvin (-273 Celsius) 00:45:23.266 Available Spare: 0% 00:45:23.266 Available Spare Threshold: 0% 00:45:23.266 Life Percentage Used: 0% 00:45:23.266 Data Units Read: 0 00:45:23.266 Data Units Written: 0 00:45:23.266 Host Read Commands: 0 00:45:23.266 Host Write Commands: 0 00:45:23.266 Controller Busy Time: 0 minutes 00:45:23.266 Power Cycles: 0 00:45:23.266 Power On Hours: 0 hours 00:45:23.266 Unsafe Shutdowns: 0 00:45:23.266 Unrecoverable Media Errors: 0 00:45:23.266 Lifetime Error Log Entries: 0 00:45:23.266 Warning Temperature Time: 0 minutes 00:45:23.266 Critical Temperature Time: 0 minutes 00:45:23.266 00:45:23.266 Number of Queues 00:45:23.266 ================ 00:45:23.266 Number of I/O Submission Queues: 127 00:45:23.266 Number of I/O Completion Queues: 127 00:45:23.266 00:45:23.266 Active Namespaces 00:45:23.266 ================= 00:45:23.266 Namespace ID:1 00:45:23.266 
Error Recovery Timeout: Unlimited 00:45:23.266 Command Set Identifier: NVM (00h) 00:45:23.266 Deallocate: Supported 00:45:23.266 Deallocated/Unwritten Error: Not Supported 00:45:23.266 Deallocated Read Value: Unknown 00:45:23.266 Deallocate in Write Zeroes: Not Supported 00:45:23.266 Deallocated Guard Field: 0xFFFF 00:45:23.266 Flush: Supported 00:45:23.266 Reservation: Supported 00:45:23.266 Namespace Sharing Capabilities: Multiple Controllers 00:45:23.266 Size (in LBAs): 131072 (0GiB) 00:45:23.266 Capacity (in LBAs): 131072 (0GiB) 00:45:23.266 Utilization (in LBAs): 131072 (0GiB) 00:45:23.266 NGUID: ABCDEF0123456789ABCDEF0123456789 00:45:23.266 EUI64: ABCDEF0123456789 00:45:23.266 UUID: cfd6ad5a-4a5e-47ea-a9ea-51420b744d38 00:45:23.266 Thin Provisioning: Not Supported 00:45:23.266 Per-NS Atomic Units: Yes 00:45:23.266 Atomic Boundary Size (Normal): 0 00:45:23.266 Atomic Boundary Size (PFail): 0 00:45:23.266 Atomic Boundary Offset: 0 00:45:23.266 Maximum Single Source Range Length: 65535 00:45:23.266 Maximum Copy Length: 65535 00:45:23.266 Maximum Source Range Count: 1 00:45:23.266 NGUID/EUI64 Never Reused: No 00:45:23.266 Namespace Write Protected: No 00:45:23.266 Number of LBA Formats: 1 00:45:23.266 Current LBA Format: LBA Format #00 00:45:23.266 LBA Format #00: Data Size: 512 Metadata Size: 0 00:45:23.266 00:45:23.266 13:07:42 -- host/identify.sh@51 -- # sync 00:45:23.266 13:07:42 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:45:23.266 13:07:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:45:23.266 13:07:42 -- common/autotest_common.sh@10 -- # set +x 00:45:23.266 13:07:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:45:23.266 13:07:42 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:45:23.266 13:07:42 -- host/identify.sh@56 -- # nvmftestfini 00:45:23.266 13:07:42 -- nvmf/common.sh@476 -- # nvmfcleanup 00:45:23.266 13:07:42 -- nvmf/common.sh@116 -- # sync 00:45:23.525 13:07:42 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:45:23.525 13:07:42 -- nvmf/common.sh@119 -- # set +e 00:45:23.525 13:07:42 -- nvmf/common.sh@120 -- # for i in {1..20} 00:45:23.525 13:07:42 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:45:23.525 rmmod nvme_tcp 00:45:23.525 rmmod nvme_fabrics 00:45:23.525 rmmod nvme_keyring 00:45:23.525 13:07:42 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:45:23.525 13:07:42 -- nvmf/common.sh@123 -- # set -e 00:45:23.525 13:07:42 -- nvmf/common.sh@124 -- # return 0 00:45:23.525 13:07:42 -- nvmf/common.sh@477 -- # '[' -n 92744 ']' 00:45:23.525 13:07:42 -- nvmf/common.sh@478 -- # killprocess 92744 00:45:23.525 13:07:42 -- common/autotest_common.sh@926 -- # '[' -z 92744 ']' 00:45:23.525 13:07:42 -- common/autotest_common.sh@930 -- # kill -0 92744 00:45:23.525 13:07:42 -- common/autotest_common.sh@931 -- # uname 00:45:23.525 13:07:42 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:45:23.525 13:07:42 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 92744 00:45:23.525 13:07:42 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:45:23.525 13:07:42 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:45:23.525 killing process with pid 92744 00:45:23.525 13:07:42 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 92744' 00:45:23.525 13:07:42 -- common/autotest_common.sh@945 -- # kill 92744 00:45:23.525 [2024-07-22 13:07:42.759450] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 
'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:45:23.525 13:07:42 -- common/autotest_common.sh@950 -- # wait 92744 00:45:23.784 13:07:42 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:45:23.784 13:07:42 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:45:23.784 13:07:42 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:45:23.784 13:07:42 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:45:23.784 13:07:42 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:45:23.784 13:07:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:45:23.784 13:07:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:45:23.784 13:07:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:23.784 13:07:43 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:45:23.784 00:45:23.784 real 0m2.538s 00:45:23.784 user 0m7.306s 00:45:23.784 sys 0m0.644s 00:45:23.784 13:07:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:45:23.784 13:07:43 -- common/autotest_common.sh@10 -- # set +x 00:45:23.784 ************************************ 00:45:23.784 END TEST nvmf_identify 00:45:23.784 ************************************ 00:45:23.784 13:07:43 -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:45:23.784 13:07:43 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:45:23.784 13:07:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:45:23.784 13:07:43 -- common/autotest_common.sh@10 -- # set +x 00:45:23.784 ************************************ 00:45:23.784 START TEST nvmf_perf 00:45:23.784 ************************************ 00:45:23.784 13:07:43 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:45:23.784 * Looking for test storage... 
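The controller and namespace report printed above came from querying the freshly created TCP subsystem at 10.0.0.2:4420 (nqn.2016-06.io.spdk:cnode1). A minimal way to reproduce that kind of query by hand is sketched below; the nvme-cli invocation and the identify binary path are assumptions for illustration, not commands taken from this run.

    # Discover the SPDK target over NVMe/TCP (address and port from the banner above).
    nvme discover -t tcp -a 10.0.0.2 -s 4420
    # SPDK's identify example prints the same controller/namespace details seen in the
    # log; the binary location is assumed here.
    ./build/examples/identify \
        -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'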
00:45:23.784 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:45:23.784 13:07:43 -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:45:23.784 13:07:43 -- nvmf/common.sh@7 -- # uname -s 00:45:23.784 13:07:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:45:23.784 13:07:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:45:23.784 13:07:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:45:23.784 13:07:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:45:23.784 13:07:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:45:23.784 13:07:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:45:23.784 13:07:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:45:23.784 13:07:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:45:23.784 13:07:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:45:23.784 13:07:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:45:23.784 13:07:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:864ae4a3-cd96-439b-8ef2-6dab4d992115 00:45:23.784 13:07:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=864ae4a3-cd96-439b-8ef2-6dab4d992115 00:45:23.784 13:07:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:45:23.784 13:07:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:45:23.784 13:07:43 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:45:23.784 13:07:43 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:45:23.784 13:07:43 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:45:23.785 13:07:43 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:45:23.785 13:07:43 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:45:23.785 13:07:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:23.785 13:07:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:23.785 13:07:43 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:23.785 13:07:43 -- paths/export.sh@5 -- 
# export PATH 00:45:23.785 13:07:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:23.785 13:07:43 -- nvmf/common.sh@46 -- # : 0 00:45:23.785 13:07:43 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:45:23.785 13:07:43 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:45:23.785 13:07:43 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:45:23.785 13:07:43 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:45:23.785 13:07:43 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:45:23.785 13:07:43 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:45:23.785 13:07:43 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:45:23.785 13:07:43 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:45:23.785 13:07:43 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:45:23.785 13:07:43 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:45:23.785 13:07:43 -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:45:23.785 13:07:43 -- host/perf.sh@17 -- # nvmftestinit 00:45:23.785 13:07:43 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:45:23.785 13:07:43 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:45:23.785 13:07:43 -- nvmf/common.sh@436 -- # prepare_net_devs 00:45:23.785 13:07:43 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:45:23.785 13:07:43 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:45:23.785 13:07:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:45:23.785 13:07:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:45:23.785 13:07:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:23.785 13:07:43 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:45:23.785 13:07:43 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:45:23.785 13:07:43 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:45:23.785 13:07:43 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:45:23.785 13:07:43 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:45:23.785 13:07:43 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:45:23.785 13:07:43 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:45:23.785 13:07:43 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:45:23.785 13:07:43 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:45:23.785 13:07:43 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:45:23.785 13:07:43 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:45:23.785 13:07:43 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:45:23.785 13:07:43 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:45:23.785 13:07:43 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:45:23.785 13:07:43 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:45:23.785 13:07:43 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:45:23.785 13:07:43 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:45:23.785 13:07:43 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:45:23.785 13:07:43 -- 
nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:45:23.785 13:07:43 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:45:23.785 Cannot find device "nvmf_tgt_br" 00:45:23.785 13:07:43 -- nvmf/common.sh@154 -- # true 00:45:23.785 13:07:43 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:45:24.044 Cannot find device "nvmf_tgt_br2" 00:45:24.044 13:07:43 -- nvmf/common.sh@155 -- # true 00:45:24.044 13:07:43 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:45:24.044 13:07:43 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:45:24.044 Cannot find device "nvmf_tgt_br" 00:45:24.044 13:07:43 -- nvmf/common.sh@157 -- # true 00:45:24.044 13:07:43 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:45:24.044 Cannot find device "nvmf_tgt_br2" 00:45:24.044 13:07:43 -- nvmf/common.sh@158 -- # true 00:45:24.044 13:07:43 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:45:24.044 13:07:43 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:45:24.044 13:07:43 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:45:24.044 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:45:24.044 13:07:43 -- nvmf/common.sh@161 -- # true 00:45:24.044 13:07:43 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:45:24.044 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:45:24.044 13:07:43 -- nvmf/common.sh@162 -- # true 00:45:24.044 13:07:43 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:45:24.044 13:07:43 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:45:24.044 13:07:43 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:45:24.044 13:07:43 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:45:24.044 13:07:43 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:45:24.044 13:07:43 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:45:24.044 13:07:43 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:45:24.044 13:07:43 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:45:24.044 13:07:43 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:45:24.044 13:07:43 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:45:24.044 13:07:43 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:45:24.044 13:07:43 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:45:24.044 13:07:43 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:45:24.044 13:07:43 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:45:24.044 13:07:43 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:45:24.044 13:07:43 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:45:24.044 13:07:43 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:45:24.044 13:07:43 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:45:24.044 13:07:43 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:45:24.044 13:07:43 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:45:24.044 13:07:43 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:45:24.044 13:07:43 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 
-i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:45:24.044 13:07:43 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:45:24.044 13:07:43 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:45:24.303 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:45:24.303 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:45:24.303 00:45:24.303 --- 10.0.0.2 ping statistics --- 00:45:24.303 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:24.303 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:45:24.303 13:07:43 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:45:24.303 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:45:24.303 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:45:24.303 00:45:24.303 --- 10.0.0.3 ping statistics --- 00:45:24.303 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:24.303 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:45:24.303 13:07:43 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:45:24.303 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:45:24.303 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:45:24.303 00:45:24.303 --- 10.0.0.1 ping statistics --- 00:45:24.303 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:24.303 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:45:24.303 13:07:43 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:45:24.303 13:07:43 -- nvmf/common.sh@421 -- # return 0 00:45:24.303 13:07:43 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:45:24.303 13:07:43 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:45:24.303 13:07:43 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:45:24.303 13:07:43 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:45:24.303 13:07:43 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:45:24.303 13:07:43 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:45:24.303 13:07:43 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:45:24.303 13:07:43 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:45:24.303 13:07:43 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:45:24.303 13:07:43 -- common/autotest_common.sh@712 -- # xtrace_disable 00:45:24.303 13:07:43 -- common/autotest_common.sh@10 -- # set +x 00:45:24.303 13:07:43 -- nvmf/common.sh@469 -- # nvmfpid=92972 00:45:24.303 13:07:43 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:45:24.303 13:07:43 -- nvmf/common.sh@470 -- # waitforlisten 92972 00:45:24.303 13:07:43 -- common/autotest_common.sh@819 -- # '[' -z 92972 ']' 00:45:24.303 13:07:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:45:24.303 13:07:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:45:24.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:45:24.303 13:07:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:45:24.303 13:07:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:45:24.303 13:07:43 -- common/autotest_common.sh@10 -- # set +x 00:45:24.303 [2024-07-22 13:07:43.544390] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
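For readers skimming the trace, the nvmf_veth_init sequence that just ran reduces to the condensed sketch below: one network namespace for the target, three veth pairs, a bridge joining the host-side peers, and an iptables rule admitting NVMe/TCP traffic on port 4420. This is only a re-statement of commands already visible in the trace above (same namespace, interface and address names); it is not an extra step the test performs, and running it by hand outside the harness is untested.
# Condensed sketch of the topology built by nvmf_veth_init
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br          # initiator end stays in the default namespace
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br            # first target end
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2          # second target end
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                     # move target ends into the namespace
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                           # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up && ip link set nvmf_init_br up
ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br                            # enslave host-side peers to the bridge
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT  # admit NVMe/TCP from the initiator interface
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2                                                  # initiator-to-target reachability check
With that wiring in place, the nvmf_tgt application launched just above runs inside nvmf_tgt_ns_spdk (via the NVMF_TARGET_NS_CMD prefix) and listens on 10.0.0.2, while spdk_nvme_perf connects to it from the default namespace.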
00:45:24.303 [2024-07-22 13:07:43.544462] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:45:24.303 [2024-07-22 13:07:43.673996] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:45:24.562 [2024-07-22 13:07:43.743785] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:45:24.562 [2024-07-22 13:07:43.743921] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:45:24.562 [2024-07-22 13:07:43.743933] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:45:24.562 [2024-07-22 13:07:43.743941] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:45:24.562 [2024-07-22 13:07:43.744087] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:45:24.562 [2024-07-22 13:07:43.744238] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:45:24.562 [2024-07-22 13:07:43.744296] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:45:24.562 [2024-07-22 13:07:43.744301] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:45:25.499 13:07:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:45:25.499 13:07:44 -- common/autotest_common.sh@852 -- # return 0 00:45:25.499 13:07:44 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:45:25.499 13:07:44 -- common/autotest_common.sh@718 -- # xtrace_disable 00:45:25.499 13:07:44 -- common/autotest_common.sh@10 -- # set +x 00:45:25.499 13:07:44 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:45:25.499 13:07:44 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:45:25.499 13:07:44 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:45:25.758 13:07:45 -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:45:25.758 13:07:45 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:45:26.017 13:07:45 -- host/perf.sh@30 -- # local_nvme_trid=0000:00:06.0 00:45:26.017 13:07:45 -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:45:26.275 13:07:45 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:45:26.275 13:07:45 -- host/perf.sh@33 -- # '[' -n 0000:00:06.0 ']' 00:45:26.275 13:07:45 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:45:26.275 13:07:45 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:45:26.276 13:07:45 -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:45:26.534 [2024-07-22 13:07:45.802469] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:45:26.534 13:07:45 -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:45:26.793 13:07:46 -- host/perf.sh@45 -- # for bdev in $bdevs 00:45:26.793 13:07:46 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:45:27.051 13:07:46 -- host/perf.sh@45 -- # for bdev in $bdevs 00:45:27.051 13:07:46 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:45:27.310 
13:07:46 -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:45:27.568 [2024-07-22 13:07:46.827853] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:45:27.568 13:07:46 -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:45:27.826 13:07:47 -- host/perf.sh@52 -- # '[' -n 0000:00:06.0 ']' 00:45:27.826 13:07:47 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:06.0' 00:45:27.826 13:07:47 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:45:27.826 13:07:47 -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:06.0' 00:45:28.763 Initializing NVMe Controllers 00:45:28.763 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:45:28.763 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:45:28.763 Initialization complete. Launching workers. 00:45:28.763 ======================================================== 00:45:28.763 Latency(us) 00:45:28.763 Device Information : IOPS MiB/s Average min max 00:45:28.763 PCIE (0000:00:06.0) NSID 1 from core 0: 22144.00 86.50 1444.69 417.64 7815.37 00:45:28.763 ======================================================== 00:45:28.763 Total : 22144.00 86.50 1444.69 417.64 7815.37 00:45:28.763 00:45:28.763 13:07:48 -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:45:30.140 Initializing NVMe Controllers 00:45:30.140 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:45:30.140 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:45:30.140 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:45:30.140 Initialization complete. Launching workers. 00:45:30.140 ======================================================== 00:45:30.140 Latency(us) 00:45:30.140 Device Information : IOPS MiB/s Average min max 00:45:30.140 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3894.94 15.21 255.44 103.05 6115.29 00:45:30.140 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 123.00 0.48 8185.66 5983.76 12078.67 00:45:30.140 ======================================================== 00:45:30.140 Total : 4017.94 15.70 498.21 103.05 12078.67 00:45:30.140 00:45:30.140 13:07:49 -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:45:31.518 Initializing NVMe Controllers 00:45:31.518 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:45:31.518 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:45:31.518 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:45:31.518 Initialization complete. Launching workers. 
00:45:31.518 ======================================================== 00:45:31.518 Latency(us) 00:45:31.518 Device Information : IOPS MiB/s Average min max 00:45:31.518 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10116.00 39.52 3163.76 585.77 7656.69 00:45:31.518 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2704.00 10.56 11924.77 5841.17 22839.05 00:45:31.518 ======================================================== 00:45:31.518 Total : 12820.00 50.08 5011.64 585.77 22839.05 00:45:31.518 00:45:31.518 13:07:50 -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:45:31.518 13:07:50 -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:45:34.054 Initializing NVMe Controllers 00:45:34.054 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:45:34.054 Controller IO queue size 128, less than required. 00:45:34.054 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:45:34.054 Controller IO queue size 128, less than required. 00:45:34.054 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:45:34.054 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:45:34.054 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:45:34.054 Initialization complete. Launching workers. 00:45:34.054 ======================================================== 00:45:34.054 Latency(us) 00:45:34.054 Device Information : IOPS MiB/s Average min max 00:45:34.054 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1753.80 438.45 73679.69 47642.23 124988.11 00:45:34.054 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 601.56 150.39 222231.44 93553.98 347390.30 00:45:34.054 ======================================================== 00:45:34.054 Total : 2355.36 588.84 111619.85 47642.23 347390.30 00:45:34.054 00:45:34.054 13:07:53 -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:45:34.312 No valid NVMe controllers or AIO or URING devices found 00:45:34.312 Initializing NVMe Controllers 00:45:34.312 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:45:34.312 Controller IO queue size 128, less than required. 00:45:34.312 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:45:34.312 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:45:34.312 Controller IO queue size 128, less than required. 00:45:34.312 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:45:34.312 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. 
Removing this ns from test 00:45:34.312 WARNING: Some requested NVMe devices were skipped 00:45:34.312 13:07:53 -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:45:36.854 Initializing NVMe Controllers 00:45:36.854 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:45:36.854 Controller IO queue size 128, less than required. 00:45:36.854 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:45:36.854 Controller IO queue size 128, less than required. 00:45:36.854 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:45:36.854 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:45:36.854 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:45:36.854 Initialization complete. Launching workers. 00:45:36.854 00:45:36.854 ==================== 00:45:36.854 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:45:36.854 TCP transport: 00:45:36.854 polls: 11921 00:45:36.854 idle_polls: 8517 00:45:36.854 sock_completions: 3404 00:45:36.854 nvme_completions: 4372 00:45:36.854 submitted_requests: 6700 00:45:36.854 queued_requests: 1 00:45:36.854 00:45:36.854 ==================== 00:45:36.854 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:45:36.854 TCP transport: 00:45:36.854 polls: 12073 00:45:36.854 idle_polls: 8818 00:45:36.854 sock_completions: 3255 00:45:36.854 nvme_completions: 6546 00:45:36.854 submitted_requests: 10016 00:45:36.854 queued_requests: 1 00:45:36.854 ======================================================== 00:45:36.854 Latency(us) 00:45:36.854 Device Information : IOPS MiB/s Average min max 00:45:36.854 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1156.44 289.11 114236.59 85464.88 191958.84 00:45:36.854 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1699.91 424.98 75563.73 33923.47 121843.06 00:45:36.854 ======================================================== 00:45:36.854 Total : 2856.35 714.09 91221.06 33923.47 191958.84 00:45:36.854 00:45:36.854 13:07:56 -- host/perf.sh@66 -- # sync 00:45:36.854 13:07:56 -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:45:37.113 13:07:56 -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:45:37.113 13:07:56 -- host/perf.sh@71 -- # '[' -n 0000:00:06.0 ']' 00:45:37.113 13:07:56 -- host/perf.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:45:37.372 13:07:56 -- host/perf.sh@72 -- # ls_guid=1a15c93f-919b-4d6a-9115-1b75a1967e2c 00:45:37.372 13:07:56 -- host/perf.sh@73 -- # get_lvs_free_mb 1a15c93f-919b-4d6a-9115-1b75a1967e2c 00:45:37.372 13:07:56 -- common/autotest_common.sh@1343 -- # local lvs_uuid=1a15c93f-919b-4d6a-9115-1b75a1967e2c 00:45:37.372 13:07:56 -- common/autotest_common.sh@1344 -- # local lvs_info 00:45:37.372 13:07:56 -- common/autotest_common.sh@1345 -- # local fc 00:45:37.372 13:07:56 -- common/autotest_common.sh@1346 -- # local cs 00:45:37.372 13:07:56 -- common/autotest_common.sh@1347 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:45:37.633 13:07:56 -- common/autotest_common.sh@1347 -- # lvs_info='[ 
00:45:37.633 { 00:45:37.633 "base_bdev": "Nvme0n1", 00:45:37.633 "block_size": 4096, 00:45:37.633 "cluster_size": 4194304, 00:45:37.633 "free_clusters": 1278, 00:45:37.633 "name": "lvs_0", 00:45:37.633 "total_data_clusters": 1278, 00:45:37.633 "uuid": "1a15c93f-919b-4d6a-9115-1b75a1967e2c" 00:45:37.633 } 00:45:37.633 ]' 00:45:37.633 13:07:56 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="1a15c93f-919b-4d6a-9115-1b75a1967e2c") .free_clusters' 00:45:37.633 13:07:56 -- common/autotest_common.sh@1348 -- # fc=1278 00:45:37.633 13:07:56 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="1a15c93f-919b-4d6a-9115-1b75a1967e2c") .cluster_size' 00:45:37.633 13:07:57 -- common/autotest_common.sh@1349 -- # cs=4194304 00:45:37.633 13:07:57 -- common/autotest_common.sh@1352 -- # free_mb=5112 00:45:37.633 5112 00:45:37.633 13:07:57 -- common/autotest_common.sh@1353 -- # echo 5112 00:45:37.633 13:07:57 -- host/perf.sh@77 -- # '[' 5112 -gt 20480 ']' 00:45:37.633 13:07:57 -- host/perf.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 1a15c93f-919b-4d6a-9115-1b75a1967e2c lbd_0 5112 00:45:37.892 13:07:57 -- host/perf.sh@80 -- # lb_guid=0f291b44-2595-4def-a4e6-d4491d94d20b 00:45:37.892 13:07:57 -- host/perf.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore 0f291b44-2595-4def-a4e6-d4491d94d20b lvs_n_0 00:45:38.459 13:07:57 -- host/perf.sh@83 -- # ls_nested_guid=a0d72583-291e-4367-a0e6-73eebf5286b3 00:45:38.459 13:07:57 -- host/perf.sh@84 -- # get_lvs_free_mb a0d72583-291e-4367-a0e6-73eebf5286b3 00:45:38.459 13:07:57 -- common/autotest_common.sh@1343 -- # local lvs_uuid=a0d72583-291e-4367-a0e6-73eebf5286b3 00:45:38.459 13:07:57 -- common/autotest_common.sh@1344 -- # local lvs_info 00:45:38.459 13:07:57 -- common/autotest_common.sh@1345 -- # local fc 00:45:38.459 13:07:57 -- common/autotest_common.sh@1346 -- # local cs 00:45:38.459 13:07:57 -- common/autotest_common.sh@1347 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:45:38.459 13:07:57 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:45:38.459 { 00:45:38.459 "base_bdev": "Nvme0n1", 00:45:38.459 "block_size": 4096, 00:45:38.459 "cluster_size": 4194304, 00:45:38.459 "free_clusters": 0, 00:45:38.459 "name": "lvs_0", 00:45:38.459 "total_data_clusters": 1278, 00:45:38.459 "uuid": "1a15c93f-919b-4d6a-9115-1b75a1967e2c" 00:45:38.459 }, 00:45:38.459 { 00:45:38.459 "base_bdev": "0f291b44-2595-4def-a4e6-d4491d94d20b", 00:45:38.459 "block_size": 4096, 00:45:38.459 "cluster_size": 4194304, 00:45:38.459 "free_clusters": 1276, 00:45:38.459 "name": "lvs_n_0", 00:45:38.459 "total_data_clusters": 1276, 00:45:38.459 "uuid": "a0d72583-291e-4367-a0e6-73eebf5286b3" 00:45:38.459 } 00:45:38.459 ]' 00:45:38.459 13:07:57 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="a0d72583-291e-4367-a0e6-73eebf5286b3") .free_clusters' 00:45:38.459 13:07:57 -- common/autotest_common.sh@1348 -- # fc=1276 00:45:38.459 13:07:57 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="a0d72583-291e-4367-a0e6-73eebf5286b3") .cluster_size' 00:45:38.718 13:07:57 -- common/autotest_common.sh@1349 -- # cs=4194304 00:45:38.718 13:07:57 -- common/autotest_common.sh@1352 -- # free_mb=5104 00:45:38.718 5104 00:45:38.718 13:07:57 -- common/autotest_common.sh@1353 -- # echo 5104 00:45:38.718 13:07:57 -- host/perf.sh@85 -- # '[' 5104 -gt 20480 ']' 00:45:38.718 13:07:57 -- host/perf.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 
a0d72583-291e-4367-a0e6-73eebf5286b3 lbd_nest_0 5104 00:45:38.718 13:07:58 -- host/perf.sh@88 -- # lb_nested_guid=9c3f9768-f8f1-4108-b4bf-3b0aa9eb1169 00:45:38.718 13:07:58 -- host/perf.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:45:38.977 13:07:58 -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:45:38.977 13:07:58 -- host/perf.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 9c3f9768-f8f1-4108-b4bf-3b0aa9eb1169 00:45:39.235 13:07:58 -- host/perf.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:45:39.494 13:07:58 -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:45:39.494 13:07:58 -- host/perf.sh@96 -- # io_size=("512" "131072") 00:45:39.494 13:07:58 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:45:39.494 13:07:58 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:45:39.494 13:07:58 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:45:39.752 No valid NVMe controllers or AIO or URING devices found 00:45:39.752 Initializing NVMe Controllers 00:45:39.752 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:45:39.752 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:45:39.752 WARNING: Some requested NVMe devices were skipped 00:45:39.752 13:07:59 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:45:39.752 13:07:59 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:45:51.980 Initializing NVMe Controllers 00:45:51.980 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:45:51.980 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:45:51.980 Initialization complete. Launching workers. 
00:45:51.980 ======================================================== 00:45:51.980 Latency(us) 00:45:51.980 Device Information : IOPS MiB/s Average min max 00:45:51.980 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 940.80 117.60 1062.55 326.96 8423.12 00:45:51.980 ======================================================== 00:45:51.980 Total : 940.80 117.60 1062.55 326.96 8423.12 00:45:51.980 00:45:51.980 13:08:09 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:45:51.980 13:08:09 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:45:51.980 13:08:09 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:45:51.980 No valid NVMe controllers or AIO or URING devices found 00:45:51.980 Initializing NVMe Controllers 00:45:51.980 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:45:51.980 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:45:51.980 WARNING: Some requested NVMe devices were skipped 00:45:51.980 13:08:09 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:45:51.980 13:08:09 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:46:01.951 [2024-07-22 13:08:19.826170] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x152fc10 is same with the state(5) to be set 00:46:01.951 [2024-07-22 13:08:19.826242] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x152fc10 is same with the state(5) to be set 00:46:01.951 [2024-07-22 13:08:19.826270] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x152fc10 is same with the state(5) to be set 00:46:01.951 Initializing NVMe Controllers 00:46:01.951 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:46:01.951 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:46:01.951 Initialization complete. Launching workers. 
00:46:01.951 ======================================================== 00:46:01.951 Latency(us) 00:46:01.951 Device Information : IOPS MiB/s Average min max 00:46:01.951 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1152.79 144.10 27798.79 7973.67 86599.86 00:46:01.951 ======================================================== 00:46:01.951 Total : 1152.79 144.10 27798.79 7973.67 86599.86 00:46:01.951 00:46:01.951 13:08:19 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:46:01.951 13:08:19 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:46:01.951 13:08:19 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:46:01.951 No valid NVMe controllers or AIO or URING devices found 00:46:01.951 Initializing NVMe Controllers 00:46:01.951 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:46:01.951 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:46:01.951 WARNING: Some requested NVMe devices were skipped 00:46:01.951 13:08:20 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:46:01.951 13:08:20 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:46:11.928 Initializing NVMe Controllers 00:46:11.928 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:46:11.928 Controller IO queue size 128, less than required. 00:46:11.928 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:46:11.928 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:46:11.928 Initialization complete. Launching workers. 
00:46:11.928 ======================================================== 00:46:11.928 Latency(us) 00:46:11.928 Device Information : IOPS MiB/s Average min max 00:46:11.928 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4113.98 514.25 31139.81 9491.32 69067.24 00:46:11.928 ======================================================== 00:46:11.928 Total : 4113.98 514.25 31139.81 9491.32 69067.24 00:46:11.928 00:46:11.928 13:08:30 -- host/perf.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:46:11.928 13:08:30 -- host/perf.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 9c3f9768-f8f1-4108-b4bf-3b0aa9eb1169 00:46:11.928 13:08:31 -- host/perf.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:46:12.186 13:08:31 -- host/perf.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 0f291b44-2595-4def-a4e6-d4491d94d20b 00:46:12.186 13:08:31 -- host/perf.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:46:12.443 13:08:31 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:46:12.443 13:08:31 -- host/perf.sh@114 -- # nvmftestfini 00:46:12.443 13:08:31 -- nvmf/common.sh@476 -- # nvmfcleanup 00:46:12.443 13:08:31 -- nvmf/common.sh@116 -- # sync 00:46:12.443 13:08:31 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:46:12.443 13:08:31 -- nvmf/common.sh@119 -- # set +e 00:46:12.443 13:08:31 -- nvmf/common.sh@120 -- # for i in {1..20} 00:46:12.443 13:08:31 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:46:12.443 rmmod nvme_tcp 00:46:12.443 rmmod nvme_fabrics 00:46:12.443 rmmod nvme_keyring 00:46:12.443 13:08:31 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:46:12.443 13:08:31 -- nvmf/common.sh@123 -- # set -e 00:46:12.443 13:08:31 -- nvmf/common.sh@124 -- # return 0 00:46:12.443 13:08:31 -- nvmf/common.sh@477 -- # '[' -n 92972 ']' 00:46:12.443 13:08:31 -- nvmf/common.sh@478 -- # killprocess 92972 00:46:12.443 13:08:31 -- common/autotest_common.sh@926 -- # '[' -z 92972 ']' 00:46:12.443 13:08:31 -- common/autotest_common.sh@930 -- # kill -0 92972 00:46:12.443 13:08:31 -- common/autotest_common.sh@931 -- # uname 00:46:12.443 13:08:31 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:46:12.443 13:08:31 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 92972 00:46:12.703 killing process with pid 92972 00:46:12.703 13:08:31 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:46:12.703 13:08:31 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:46:12.703 13:08:31 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 92972' 00:46:12.703 13:08:31 -- common/autotest_common.sh@945 -- # kill 92972 00:46:12.703 13:08:31 -- common/autotest_common.sh@950 -- # wait 92972 00:46:14.077 13:08:33 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:46:14.077 13:08:33 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:46:14.077 13:08:33 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:46:14.077 13:08:33 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:46:14.077 13:08:33 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:46:14.077 13:08:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:46:14.077 13:08:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:46:14.077 13:08:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:46:14.077 13:08:33 -- nvmf/common.sh@278 -- # ip 
-4 addr flush nvmf_init_if 00:46:14.077 00:46:14.077 real 0m50.359s 00:46:14.077 user 3m10.925s 00:46:14.077 sys 0m10.490s 00:46:14.077 13:08:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:46:14.077 ************************************ 00:46:14.077 END TEST nvmf_perf 00:46:14.077 ************************************ 00:46:14.077 13:08:33 -- common/autotest_common.sh@10 -- # set +x 00:46:14.077 13:08:33 -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:46:14.077 13:08:33 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:46:14.077 13:08:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:46:14.077 13:08:33 -- common/autotest_common.sh@10 -- # set +x 00:46:14.077 ************************************ 00:46:14.077 START TEST nvmf_fio_host 00:46:14.077 ************************************ 00:46:14.077 13:08:33 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:46:14.337 * Looking for test storage... 00:46:14.337 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:46:14.337 13:08:33 -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:46:14.337 13:08:33 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:46:14.337 13:08:33 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:46:14.337 13:08:33 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:46:14.337 13:08:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:14.337 13:08:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:14.337 13:08:33 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:14.337 13:08:33 -- paths/export.sh@5 -- # export PATH 00:46:14.337 13:08:33 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:14.337 13:08:33 -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:46:14.337 13:08:33 -- nvmf/common.sh@7 -- # uname -s 00:46:14.337 13:08:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:46:14.337 13:08:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:46:14.337 13:08:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:46:14.337 13:08:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:46:14.337 13:08:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:46:14.337 13:08:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:46:14.337 13:08:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:46:14.337 13:08:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:46:14.337 13:08:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:46:14.337 13:08:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:46:14.337 13:08:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:864ae4a3-cd96-439b-8ef2-6dab4d992115 00:46:14.337 13:08:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=864ae4a3-cd96-439b-8ef2-6dab4d992115 00:46:14.337 13:08:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:46:14.337 13:08:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:46:14.337 13:08:33 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:46:14.337 13:08:33 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:46:14.337 13:08:33 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:46:14.337 13:08:33 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:46:14.337 13:08:33 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:46:14.337 13:08:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:14.337 13:08:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:14.338 13:08:33 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:14.338 13:08:33 -- paths/export.sh@5 -- # export PATH 00:46:14.338 13:08:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:14.338 13:08:33 -- nvmf/common.sh@46 -- # : 0 00:46:14.338 13:08:33 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:46:14.338 13:08:33 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:46:14.338 13:08:33 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:46:14.338 13:08:33 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:46:14.338 13:08:33 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:46:14.338 13:08:33 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:46:14.338 13:08:33 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:46:14.338 13:08:33 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:46:14.338 13:08:33 -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:46:14.338 13:08:33 -- host/fio.sh@14 -- # nvmftestinit 00:46:14.338 13:08:33 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:46:14.338 13:08:33 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:46:14.338 13:08:33 -- nvmf/common.sh@436 -- # prepare_net_devs 00:46:14.338 13:08:33 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:46:14.338 13:08:33 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:46:14.338 13:08:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:46:14.338 13:08:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:46:14.338 13:08:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:46:14.338 13:08:33 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:46:14.338 13:08:33 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:46:14.338 13:08:33 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:46:14.338 13:08:33 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:46:14.338 13:08:33 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:46:14.338 13:08:33 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:46:14.338 13:08:33 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:46:14.338 13:08:33 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:46:14.338 13:08:33 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:46:14.338 13:08:33 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:46:14.338 13:08:33 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:46:14.338 13:08:33 -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:46:14.338 13:08:33 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:46:14.338 13:08:33 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:46:14.338 13:08:33 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:46:14.338 13:08:33 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:46:14.338 13:08:33 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:46:14.338 13:08:33 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:46:14.338 13:08:33 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:46:14.338 13:08:33 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:46:14.338 Cannot find device "nvmf_tgt_br" 00:46:14.338 13:08:33 -- nvmf/common.sh@154 -- # true 00:46:14.338 13:08:33 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:46:14.338 Cannot find device "nvmf_tgt_br2" 00:46:14.338 13:08:33 -- nvmf/common.sh@155 -- # true 00:46:14.338 13:08:33 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:46:14.338 13:08:33 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:46:14.338 Cannot find device "nvmf_tgt_br" 00:46:14.338 13:08:33 -- nvmf/common.sh@157 -- # true 00:46:14.338 13:08:33 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:46:14.338 Cannot find device "nvmf_tgt_br2" 00:46:14.338 13:08:33 -- nvmf/common.sh@158 -- # true 00:46:14.338 13:08:33 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:46:14.338 13:08:33 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:46:14.338 13:08:33 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:46:14.338 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:46:14.338 13:08:33 -- nvmf/common.sh@161 -- # true 00:46:14.338 13:08:33 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:46:14.338 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:46:14.338 13:08:33 -- nvmf/common.sh@162 -- # true 00:46:14.338 13:08:33 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:46:14.338 13:08:33 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:46:14.338 13:08:33 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:46:14.338 13:08:33 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:46:14.338 13:08:33 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:46:14.597 13:08:33 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:46:14.597 13:08:33 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:46:14.597 13:08:33 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:46:14.597 13:08:33 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:46:14.597 13:08:33 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:46:14.597 13:08:33 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:46:14.597 13:08:33 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:46:14.597 13:08:33 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:46:14.597 13:08:33 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:46:14.597 13:08:33 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
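At this point the fio host test has re-created the same namespace/veth layout that the perf test above used. The commands below are hypothetical read-only helpers (they are not part of fio.sh or common.sh) and are included only as one way to verify the wiring once the remaining bridge, iptables and ping steps in the trace have completed:
# Hypothetical inspection commands; the test itself does not run these.
ip netns list                                      # expect: nvmf_tgt_ns_spdk
ip -br addr show dev nvmf_init_if                  # expect: 10.0.0.1/24 in the default namespace
ip netns exec nvmf_tgt_ns_spdk ip -br addr show    # expect: nvmf_tgt_if 10.0.0.2/24, nvmf_tgt_if2 10.0.0.3/24, lo UP
ip -br link show master nvmf_br                    # expect: nvmf_init_br, nvmf_tgt_br, nvmf_tgt_br2 enslaved
ping -c 1 10.0.0.2                                 # same reachability probe the script performs below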
00:46:14.597 13:08:33 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:46:14.597 13:08:33 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:46:14.597 13:08:33 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:46:14.597 13:08:33 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:46:14.597 13:08:33 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:46:14.597 13:08:33 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:46:14.597 13:08:33 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:46:14.597 13:08:33 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:46:14.597 13:08:33 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:46:14.597 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:46:14.597 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:46:14.597 00:46:14.597 --- 10.0.0.2 ping statistics --- 00:46:14.597 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:46:14.597 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:46:14.597 13:08:33 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:46:14.597 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:46:14.597 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:46:14.597 00:46:14.597 --- 10.0.0.3 ping statistics --- 00:46:14.597 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:46:14.597 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:46:14.597 13:08:33 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:46:14.597 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:46:14.597 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:46:14.597 00:46:14.597 --- 10.0.0.1 ping statistics --- 00:46:14.597 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:46:14.597 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:46:14.597 13:08:33 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:46:14.597 13:08:33 -- nvmf/common.sh@421 -- # return 0 00:46:14.597 13:08:33 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:46:14.597 13:08:33 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:46:14.597 13:08:33 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:46:14.597 13:08:33 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:46:14.597 13:08:33 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:46:14.597 13:08:33 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:46:14.597 13:08:33 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:46:14.597 13:08:33 -- host/fio.sh@16 -- # [[ y != y ]] 00:46:14.597 13:08:33 -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:46:14.597 13:08:33 -- common/autotest_common.sh@712 -- # xtrace_disable 00:46:14.597 13:08:33 -- common/autotest_common.sh@10 -- # set +x 00:46:14.597 13:08:33 -- host/fio.sh@24 -- # nvmfpid=93939 00:46:14.597 13:08:33 -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:46:14.597 13:08:33 -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:46:14.597 13:08:33 -- host/fio.sh@28 -- # waitforlisten 93939 00:46:14.597 13:08:33 -- common/autotest_common.sh@819 -- # '[' -z 93939 ']' 00:46:14.597 13:08:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:46:14.597 13:08:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:46:14.597 13:08:33 -- 
common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:46:14.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:46:14.597 13:08:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:46:14.597 13:08:33 -- common/autotest_common.sh@10 -- # set +x 00:46:14.597 [2024-07-22 13:08:33.979275] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:46:14.597 [2024-07-22 13:08:33.979340] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:46:14.855 [2024-07-22 13:08:34.115626] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:46:14.855 [2024-07-22 13:08:34.177690] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:46:14.855 [2024-07-22 13:08:34.177834] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:46:14.855 [2024-07-22 13:08:34.177858] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:46:14.855 [2024-07-22 13:08:34.177866] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:46:14.855 [2024-07-22 13:08:34.178030] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:46:14.855 [2024-07-22 13:08:34.178996] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:46:14.855 [2024-07-22 13:08:34.179123] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:46:14.855 [2024-07-22 13:08:34.179144] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:46:15.789 13:08:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:46:15.789 13:08:34 -- common/autotest_common.sh@852 -- # return 0 00:46:15.789 13:08:34 -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:46:15.789 [2024-07-22 13:08:35.080983] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:46:15.789 13:08:35 -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:46:15.789 13:08:35 -- common/autotest_common.sh@718 -- # xtrace_disable 00:46:15.789 13:08:35 -- common/autotest_common.sh@10 -- # set +x 00:46:15.789 13:08:35 -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:46:16.047 Malloc1 00:46:16.047 13:08:35 -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:46:16.305 13:08:35 -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:46:16.563 13:08:35 -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:46:16.820 [2024-07-22 13:08:36.079829] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:46:16.820 13:08:36 -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:46:17.078 13:08:36 -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:46:17.078 13:08:36 -- host/fio.sh@41 -- # fio_nvme 
/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:46:17.078 13:08:36 -- common/autotest_common.sh@1339 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:46:17.078 13:08:36 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:46:17.078 13:08:36 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:46:17.078 13:08:36 -- common/autotest_common.sh@1318 -- # local sanitizers 00:46:17.078 13:08:36 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:46:17.078 13:08:36 -- common/autotest_common.sh@1320 -- # shift 00:46:17.078 13:08:36 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:46:17.078 13:08:36 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:46:17.078 13:08:36 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:46:17.078 13:08:36 -- common/autotest_common.sh@1324 -- # grep libasan 00:46:17.078 13:08:36 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:46:17.078 13:08:36 -- common/autotest_common.sh@1324 -- # asan_lib= 00:46:17.078 13:08:36 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:46:17.078 13:08:36 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:46:17.078 13:08:36 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:46:17.078 13:08:36 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:46:17.078 13:08:36 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:46:17.078 13:08:36 -- common/autotest_common.sh@1324 -- # asan_lib= 00:46:17.078 13:08:36 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:46:17.078 13:08:36 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:46:17.078 13:08:36 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:46:17.078 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:46:17.078 fio-3.35 00:46:17.078 Starting 1 thread 00:46:19.605 00:46:19.605 test: (groupid=0, jobs=1): err= 0: pid=94065: Mon Jul 22 13:08:38 2024 00:46:19.605 read: IOPS=10.3k, BW=40.4MiB/s (42.3MB/s)(81.0MiB/2006msec) 00:46:19.605 slat (nsec): min=1834, max=387835, avg=2449.04, stdev=3518.17 00:46:19.605 clat (usec): min=3231, max=12466, avg=6557.61, stdev=574.64 00:46:19.605 lat (usec): min=3291, max=12468, avg=6560.06, stdev=574.57 00:46:19.605 clat percentiles (usec): 00:46:19.605 | 1.00th=[ 5342], 5.00th=[ 5735], 10.00th=[ 5932], 20.00th=[ 6128], 00:46:19.605 | 30.00th=[ 6259], 40.00th=[ 6390], 50.00th=[ 6521], 60.00th=[ 6652], 00:46:19.605 | 70.00th=[ 6783], 80.00th=[ 6980], 90.00th=[ 7242], 95.00th=[ 7504], 00:46:19.605 | 99.00th=[ 8029], 99.50th=[ 8586], 99.90th=[10421], 99.95th=[10814], 00:46:19.605 | 99.99th=[11863] 00:46:19.605 bw ( KiB/s): min=40264, max=42000, per=99.94%, avg=41310.00, stdev=740.81, samples=4 00:46:19.605 iops : min=10066, max=10500, avg=10327.50, stdev=185.20, samples=4 00:46:19.605 write: IOPS=10.3k, BW=40.4MiB/s (42.4MB/s)(81.1MiB/2006msec); 0 zone resets 00:46:19.605 slat 
(nsec): min=1920, max=263852, avg=2536.16, stdev=2456.31 00:46:19.605 clat (usec): min=2493, max=10673, avg=5774.24, stdev=473.15 00:46:19.605 lat (usec): min=2507, max=10675, avg=5776.78, stdev=473.12 00:46:19.605 clat percentiles (usec): 00:46:19.605 | 1.00th=[ 4752], 5.00th=[ 5080], 10.00th=[ 5211], 20.00th=[ 5407], 00:46:19.605 | 30.00th=[ 5538], 40.00th=[ 5669], 50.00th=[ 5735], 60.00th=[ 5866], 00:46:19.605 | 70.00th=[ 5997], 80.00th=[ 6128], 90.00th=[ 6325], 95.00th=[ 6456], 00:46:19.605 | 99.00th=[ 6915], 99.50th=[ 7111], 99.90th=[ 8979], 99.95th=[10028], 00:46:19.605 | 99.99th=[10552] 00:46:19.605 bw ( KiB/s): min=40720, max=41848, per=100.00%, avg=41394.00, stdev=485.19, samples=4 00:46:19.605 iops : min=10180, max=10462, avg=10348.50, stdev=121.30, samples=4 00:46:19.605 lat (msec) : 4=0.10%, 10=99.80%, 20=0.10% 00:46:19.605 cpu : usr=63.59%, sys=26.53%, ctx=15, majf=0, minf=5 00:46:19.605 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:46:19.605 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:19.605 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:46:19.605 issued rwts: total=20729,20750,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:19.605 latency : target=0, window=0, percentile=100.00%, depth=128 00:46:19.605 00:46:19.605 Run status group 0 (all jobs): 00:46:19.605 READ: bw=40.4MiB/s (42.3MB/s), 40.4MiB/s-40.4MiB/s (42.3MB/s-42.3MB/s), io=81.0MiB (84.9MB), run=2006-2006msec 00:46:19.605 WRITE: bw=40.4MiB/s (42.4MB/s), 40.4MiB/s-40.4MiB/s (42.4MB/s-42.4MB/s), io=81.1MiB (85.0MB), run=2006-2006msec 00:46:19.605 13:08:38 -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:46:19.605 13:08:38 -- common/autotest_common.sh@1339 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:46:19.605 13:08:38 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:46:19.605 13:08:38 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:46:19.605 13:08:38 -- common/autotest_common.sh@1318 -- # local sanitizers 00:46:19.605 13:08:38 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:46:19.605 13:08:38 -- common/autotest_common.sh@1320 -- # shift 00:46:19.605 13:08:38 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:46:19.606 13:08:38 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:46:19.606 13:08:38 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:46:19.606 13:08:38 -- common/autotest_common.sh@1324 -- # grep libasan 00:46:19.606 13:08:38 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:46:19.606 13:08:38 -- common/autotest_common.sh@1324 -- # asan_lib= 00:46:19.606 13:08:38 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:46:19.606 13:08:38 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:46:19.606 13:08:38 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:46:19.606 13:08:38 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:46:19.606 13:08:38 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:46:19.606 13:08:38 -- common/autotest_common.sh@1324 -- # asan_lib= 00:46:19.606 
13:08:38 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:46:19.606 13:08:38 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:46:19.606 13:08:38 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:46:19.606 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:46:19.606 fio-3.35 00:46:19.606 Starting 1 thread 00:46:22.166 00:46:22.166 test: (groupid=0, jobs=1): err= 0: pid=94108: Mon Jul 22 13:08:41 2024 00:46:22.166 read: IOPS=8680, BW=136MiB/s (142MB/s)(272MiB/2006msec) 00:46:22.166 slat (usec): min=2, max=458, avg= 3.80, stdev= 4.40 00:46:22.166 clat (usec): min=1815, max=17240, avg=8671.50, stdev=2155.05 00:46:22.166 lat (usec): min=1819, max=17244, avg=8675.31, stdev=2155.30 00:46:22.166 clat percentiles (usec): 00:46:22.166 | 1.00th=[ 4293], 5.00th=[ 5538], 10.00th=[ 5997], 20.00th=[ 6718], 00:46:22.166 | 30.00th=[ 7373], 40.00th=[ 7963], 50.00th=[ 8586], 60.00th=[ 9241], 00:46:22.166 | 70.00th=[ 9896], 80.00th=[10552], 90.00th=[11469], 95.00th=[12256], 00:46:22.166 | 99.00th=[14353], 99.50th=[14615], 99.90th=[16909], 99.95th=[17171], 00:46:22.166 | 99.99th=[17171] 00:46:22.166 bw ( KiB/s): min=63168, max=82560, per=51.73%, avg=71848.00, stdev=8007.05, samples=4 00:46:22.166 iops : min= 3948, max= 5160, avg=4490.50, stdev=500.44, samples=4 00:46:22.166 write: IOPS=5185, BW=81.0MiB/s (85.0MB/s)(146MiB/1804msec); 0 zone resets 00:46:22.166 slat (usec): min=31, max=353, avg=37.52, stdev=10.18 00:46:22.166 clat (usec): min=2712, max=17869, avg=10544.82, stdev=1835.54 00:46:22.166 lat (usec): min=2744, max=17947, avg=10582.34, stdev=1837.26 00:46:22.166 clat percentiles (usec): 00:46:22.166 | 1.00th=[ 6980], 5.00th=[ 7832], 10.00th=[ 8291], 20.00th=[ 8979], 00:46:22.166 | 30.00th=[ 9503], 40.00th=[10028], 50.00th=[10421], 60.00th=[10945], 00:46:22.166 | 70.00th=[11469], 80.00th=[11994], 90.00th=[12780], 95.00th=[13698], 00:46:22.166 | 99.00th=[15008], 99.50th=[16450], 99.90th=[17171], 99.95th=[17433], 00:46:22.166 | 99.99th=[17957] 00:46:22.166 bw ( KiB/s): min=66208, max=85984, per=90.20%, avg=74840.00, stdev=8255.02, samples=4 00:46:22.167 iops : min= 4138, max= 5374, avg=4677.50, stdev=515.94, samples=4 00:46:22.167 lat (msec) : 2=0.01%, 4=0.49%, 10=60.67%, 20=38.83% 00:46:22.167 cpu : usr=70.97%, sys=18.50%, ctx=19, majf=0, minf=1 00:46:22.167 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:46:22.167 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:22.167 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:46:22.167 issued rwts: total=17413,9355,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:22.167 latency : target=0, window=0, percentile=100.00%, depth=128 00:46:22.167 00:46:22.167 Run status group 0 (all jobs): 00:46:22.167 READ: bw=136MiB/s (142MB/s), 136MiB/s-136MiB/s (142MB/s-142MB/s), io=272MiB (285MB), run=2006-2006msec 00:46:22.167 WRITE: bw=81.0MiB/s (85.0MB/s), 81.0MiB/s-81.0MiB/s (85.0MB/s-85.0MB/s), io=146MiB (153MB), run=1804-1804msec 00:46:22.167 13:08:41 -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:46:22.167 13:08:41 -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:46:22.167 13:08:41 -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:46:22.167 13:08:41 -- host/fio.sh@51 
-- # get_nvme_bdfs 00:46:22.167 13:08:41 -- common/autotest_common.sh@1498 -- # bdfs=() 00:46:22.167 13:08:41 -- common/autotest_common.sh@1498 -- # local bdfs 00:46:22.167 13:08:41 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:46:22.167 13:08:41 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:46:22.167 13:08:41 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:46:22.167 13:08:41 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:46:22.167 13:08:41 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:46:22.167 13:08:41 -- host/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 -i 10.0.0.2 00:46:22.425 Nvme0n1 00:46:22.682 13:08:41 -- host/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:46:22.940 13:08:42 -- host/fio.sh@53 -- # ls_guid=52b0c3d4-ada2-47eb-a68f-bb19db0cdf78 00:46:22.940 13:08:42 -- host/fio.sh@54 -- # get_lvs_free_mb 52b0c3d4-ada2-47eb-a68f-bb19db0cdf78 00:46:22.940 13:08:42 -- common/autotest_common.sh@1343 -- # local lvs_uuid=52b0c3d4-ada2-47eb-a68f-bb19db0cdf78 00:46:22.940 13:08:42 -- common/autotest_common.sh@1344 -- # local lvs_info 00:46:22.940 13:08:42 -- common/autotest_common.sh@1345 -- # local fc 00:46:22.940 13:08:42 -- common/autotest_common.sh@1346 -- # local cs 00:46:22.940 13:08:42 -- common/autotest_common.sh@1347 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:46:23.198 13:08:42 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:46:23.198 { 00:46:23.198 "base_bdev": "Nvme0n1", 00:46:23.198 "block_size": 4096, 00:46:23.198 "cluster_size": 1073741824, 00:46:23.198 "free_clusters": 4, 00:46:23.198 "name": "lvs_0", 00:46:23.198 "total_data_clusters": 4, 00:46:23.198 "uuid": "52b0c3d4-ada2-47eb-a68f-bb19db0cdf78" 00:46:23.198 } 00:46:23.198 ]' 00:46:23.198 13:08:42 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="52b0c3d4-ada2-47eb-a68f-bb19db0cdf78") .free_clusters' 00:46:23.198 13:08:42 -- common/autotest_common.sh@1348 -- # fc=4 00:46:23.198 13:08:42 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="52b0c3d4-ada2-47eb-a68f-bb19db0cdf78") .cluster_size' 00:46:23.198 13:08:42 -- common/autotest_common.sh@1349 -- # cs=1073741824 00:46:23.198 13:08:42 -- common/autotest_common.sh@1352 -- # free_mb=4096 00:46:23.198 4096 00:46:23.198 13:08:42 -- common/autotest_common.sh@1353 -- # echo 4096 00:46:23.198 13:08:42 -- host/fio.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 4096 00:46:23.456 9f5e93c8-07df-4649-b2ee-2ced1eff9545 00:46:23.456 13:08:42 -- host/fio.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:46:23.715 13:08:42 -- host/fio.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:46:23.974 13:08:43 -- host/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:46:23.974 13:08:43 -- host/fio.sh@59 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:46:23.974 13:08:43 -- common/autotest_common.sh@1339 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:46:23.974 13:08:43 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:46:23.974 13:08:43 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:46:23.974 13:08:43 -- common/autotest_common.sh@1318 -- # local sanitizers 00:46:23.974 13:08:43 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:46:23.974 13:08:43 -- common/autotest_common.sh@1320 -- # shift 00:46:23.974 13:08:43 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:46:23.974 13:08:43 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:46:23.974 13:08:43 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:46:23.974 13:08:43 -- common/autotest_common.sh@1324 -- # grep libasan 00:46:23.974 13:08:43 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:46:23.974 13:08:43 -- common/autotest_common.sh@1324 -- # asan_lib= 00:46:23.974 13:08:43 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:46:23.974 13:08:43 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:46:24.231 13:08:43 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:46:24.232 13:08:43 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:46:24.232 13:08:43 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:46:24.232 13:08:43 -- common/autotest_common.sh@1324 -- # asan_lib= 00:46:24.232 13:08:43 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:46:24.232 13:08:43 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:46:24.232 13:08:43 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:46:24.232 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:46:24.232 fio-3.35 00:46:24.232 Starting 1 thread 00:46:26.759 00:46:26.759 test: (groupid=0, jobs=1): err= 0: pid=94265: Mon Jul 22 13:08:45 2024 00:46:26.759 read: IOPS=6735, BW=26.3MiB/s (27.6MB/s)(52.8MiB/2008msec) 00:46:26.759 slat (nsec): min=1888, max=339335, avg=2728.48, stdev=4028.57 00:46:26.759 clat (usec): min=4169, max=16426, avg=10090.34, stdev=918.50 00:46:26.759 lat (usec): min=4178, max=16429, avg=10093.07, stdev=918.30 00:46:26.759 clat percentiles (usec): 00:46:26.759 | 1.00th=[ 8094], 5.00th=[ 8717], 10.00th=[ 8979], 20.00th=[ 9372], 00:46:26.759 | 30.00th=[ 9634], 40.00th=[ 9896], 50.00th=[10028], 60.00th=[10290], 00:46:26.759 | 70.00th=[10552], 80.00th=[10814], 90.00th=[11207], 95.00th=[11600], 00:46:26.759 | 99.00th=[12256], 99.50th=[12518], 99.90th=[14222], 99.95th=[15926], 00:46:26.759 | 99.99th=[16450] 00:46:26.759 bw ( KiB/s): min=25872, max=27464, per=99.92%, avg=26920.00, stdev=731.32, samples=4 00:46:26.759 iops : min= 6468, max= 6866, avg=6730.00, stdev=182.83, samples=4 00:46:26.759 write: IOPS=6739, BW=26.3MiB/s (27.6MB/s)(52.9MiB/2008msec); 0 zone resets 00:46:26.759 slat (usec): min=2, max=318, avg= 2.89, stdev= 3.46 00:46:26.759 clat (usec): min=2516, max=16483, avg=8832.44, stdev=809.52 00:46:26.759 lat (usec): min=2530, max=16485, avg=8835.33, stdev=809.45 00:46:26.759 clat 
percentiles (usec): 00:46:26.759 | 1.00th=[ 6980], 5.00th=[ 7570], 10.00th=[ 7898], 20.00th=[ 8225], 00:46:26.759 | 30.00th=[ 8455], 40.00th=[ 8586], 50.00th=[ 8848], 60.00th=[ 8979], 00:46:26.759 | 70.00th=[ 9241], 80.00th=[ 9503], 90.00th=[ 9765], 95.00th=[10028], 00:46:26.893 | 99.00th=[10552], 99.50th=[10814], 99.90th=[13960], 99.95th=[15401], 00:46:26.893 | 99.99th=[16450] 00:46:26.893 bw ( KiB/s): min=26824, max=27064, per=99.94%, avg=26942.00, stdev=106.81, samples=4 00:46:26.893 iops : min= 6706, max= 6766, avg=6735.50, stdev=26.70, samples=4 00:46:26.893 lat (msec) : 4=0.03%, 10=70.28%, 20=29.69% 00:46:26.893 cpu : usr=71.55%, sys=21.18%, ctx=9, majf=0, minf=5 00:46:26.893 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:46:26.893 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:26.893 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:46:26.893 issued rwts: total=13525,13533,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:26.893 latency : target=0, window=0, percentile=100.00%, depth=128 00:46:26.893 00:46:26.893 Run status group 0 (all jobs): 00:46:26.893 READ: bw=26.3MiB/s (27.6MB/s), 26.3MiB/s-26.3MiB/s (27.6MB/s-27.6MB/s), io=52.8MiB (55.4MB), run=2008-2008msec 00:46:26.893 WRITE: bw=26.3MiB/s (27.6MB/s), 26.3MiB/s-26.3MiB/s (27.6MB/s-27.6MB/s), io=52.9MiB (55.4MB), run=2008-2008msec 00:46:26.893 13:08:45 -- host/fio.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:46:26.893 13:08:46 -- host/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:46:27.150 13:08:46 -- host/fio.sh@64 -- # ls_nested_guid=eb78330e-c581-499f-b98f-10a2661649da 00:46:27.150 13:08:46 -- host/fio.sh@65 -- # get_lvs_free_mb eb78330e-c581-499f-b98f-10a2661649da 00:46:27.150 13:08:46 -- common/autotest_common.sh@1343 -- # local lvs_uuid=eb78330e-c581-499f-b98f-10a2661649da 00:46:27.150 13:08:46 -- common/autotest_common.sh@1344 -- # local lvs_info 00:46:27.150 13:08:46 -- common/autotest_common.sh@1345 -- # local fc 00:46:27.150 13:08:46 -- common/autotest_common.sh@1346 -- # local cs 00:46:27.150 13:08:46 -- common/autotest_common.sh@1347 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:46:27.408 13:08:46 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:46:27.408 { 00:46:27.408 "base_bdev": "Nvme0n1", 00:46:27.408 "block_size": 4096, 00:46:27.408 "cluster_size": 1073741824, 00:46:27.408 "free_clusters": 0, 00:46:27.408 "name": "lvs_0", 00:46:27.408 "total_data_clusters": 4, 00:46:27.408 "uuid": "52b0c3d4-ada2-47eb-a68f-bb19db0cdf78" 00:46:27.408 }, 00:46:27.408 { 00:46:27.408 "base_bdev": "9f5e93c8-07df-4649-b2ee-2ced1eff9545", 00:46:27.408 "block_size": 4096, 00:46:27.408 "cluster_size": 4194304, 00:46:27.408 "free_clusters": 1022, 00:46:27.408 "name": "lvs_n_0", 00:46:27.408 "total_data_clusters": 1022, 00:46:27.408 "uuid": "eb78330e-c581-499f-b98f-10a2661649da" 00:46:27.408 } 00:46:27.408 ]' 00:46:27.409 13:08:46 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="eb78330e-c581-499f-b98f-10a2661649da") .free_clusters' 00:46:27.409 13:08:46 -- common/autotest_common.sh@1348 -- # fc=1022 00:46:27.409 13:08:46 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="eb78330e-c581-499f-b98f-10a2661649da") .cluster_size' 00:46:27.409 13:08:46 -- common/autotest_common.sh@1349 -- # cs=4194304 00:46:27.409 13:08:46 -- common/autotest_common.sh@1352 -- # 
free_mb=4088 00:46:27.409 13:08:46 -- common/autotest_common.sh@1353 -- # echo 4088 00:46:27.409 4088 00:46:27.409 13:08:46 -- host/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 4088 00:46:27.666 e07c08fa-df80-4ed7-b7f8-a5093efb9c7e 00:46:27.666 13:08:46 -- host/fio.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:46:27.666 13:08:47 -- host/fio.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:46:27.924 13:08:47 -- host/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:46:28.182 13:08:47 -- host/fio.sh@70 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:46:28.183 13:08:47 -- common/autotest_common.sh@1339 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:46:28.183 13:08:47 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:46:28.183 13:08:47 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:46:28.183 13:08:47 -- common/autotest_common.sh@1318 -- # local sanitizers 00:46:28.183 13:08:47 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:46:28.183 13:08:47 -- common/autotest_common.sh@1320 -- # shift 00:46:28.183 13:08:47 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:46:28.183 13:08:47 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:46:28.183 13:08:47 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:46:28.183 13:08:47 -- common/autotest_common.sh@1324 -- # grep libasan 00:46:28.183 13:08:47 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:46:28.183 13:08:47 -- common/autotest_common.sh@1324 -- # asan_lib= 00:46:28.183 13:08:47 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:46:28.183 13:08:47 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:46:28.183 13:08:47 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:46:28.183 13:08:47 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:46:28.183 13:08:47 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:46:28.183 13:08:47 -- common/autotest_common.sh@1324 -- # asan_lib= 00:46:28.183 13:08:47 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:46:28.183 13:08:47 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:46:28.183 13:08:47 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:46:28.440 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:46:28.440 fio-3.35 00:46:28.440 Starting 1 thread 00:46:30.965 00:46:30.965 test: (groupid=0, jobs=1): err= 0: pid=94384: Mon Jul 22 13:08:49 2024 00:46:30.965 read: IOPS=6119, BW=23.9MiB/s (25.1MB/s)(48.0MiB/2009msec) 00:46:30.965 slat (usec): min=2, max=456, avg= 2.98, stdev= 6.00 00:46:30.965 
clat (usec): min=4441, max=18146, avg=11158.43, stdev=1056.68 00:46:30.965 lat (usec): min=4451, max=18149, avg=11161.42, stdev=1056.42 00:46:30.965 clat percentiles (usec): 00:46:30.965 | 1.00th=[ 8979], 5.00th=[ 9503], 10.00th=[ 9896], 20.00th=[10290], 00:46:30.965 | 30.00th=[10552], 40.00th=[10814], 50.00th=[11076], 60.00th=[11338], 00:46:30.965 | 70.00th=[11731], 80.00th=[11994], 90.00th=[12518], 95.00th=[12911], 00:46:30.965 | 99.00th=[13698], 99.50th=[14091], 99.90th=[17433], 99.95th=[17695], 00:46:30.965 | 99.99th=[17957] 00:46:30.965 bw ( KiB/s): min=23488, max=24920, per=99.91%, avg=24456.00, stdev=657.33, samples=4 00:46:30.965 iops : min= 5872, max= 6230, avg=6114.00, stdev=164.33, samples=4 00:46:30.965 write: IOPS=6099, BW=23.8MiB/s (25.0MB/s)(47.9MiB/2009msec); 0 zone resets 00:46:30.965 slat (usec): min=2, max=316, avg= 2.99, stdev= 3.87 00:46:30.965 clat (usec): min=2526, max=17869, avg=9700.63, stdev=917.00 00:46:30.965 lat (usec): min=2539, max=17872, avg=9703.62, stdev=916.94 00:46:30.965 clat percentiles (usec): 00:46:30.965 | 1.00th=[ 7701], 5.00th=[ 8291], 10.00th=[ 8586], 20.00th=[ 8979], 00:46:30.965 | 30.00th=[ 9241], 40.00th=[ 9503], 50.00th=[ 9634], 60.00th=[ 9896], 00:46:30.965 | 70.00th=[10159], 80.00th=[10421], 90.00th=[10814], 95.00th=[11076], 00:46:30.965 | 99.00th=[11731], 99.50th=[12125], 99.90th=[16450], 99.95th=[16909], 00:46:30.965 | 99.99th=[17957] 00:46:30.965 bw ( KiB/s): min=24192, max=24584, per=99.95%, avg=24386.00, stdev=168.36, samples=4 00:46:30.965 iops : min= 6048, max= 6146, avg=6096.50, stdev=42.09, samples=4 00:46:30.965 lat (msec) : 4=0.04%, 10=37.73%, 20=62.24% 00:46:30.965 cpu : usr=70.12%, sys=22.41%, ctx=17, majf=0, minf=5 00:46:30.965 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:46:30.965 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:30.965 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:46:30.965 issued rwts: total=12294,12254,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:30.965 latency : target=0, window=0, percentile=100.00%, depth=128 00:46:30.965 00:46:30.965 Run status group 0 (all jobs): 00:46:30.965 READ: bw=23.9MiB/s (25.1MB/s), 23.9MiB/s-23.9MiB/s (25.1MB/s-25.1MB/s), io=48.0MiB (50.4MB), run=2009-2009msec 00:46:30.965 WRITE: bw=23.8MiB/s (25.0MB/s), 23.8MiB/s-23.8MiB/s (25.0MB/s-25.0MB/s), io=47.9MiB (50.2MB), run=2009-2009msec 00:46:30.965 13:08:49 -- host/fio.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:46:30.965 13:08:50 -- host/fio.sh@74 -- # sync 00:46:30.965 13:08:50 -- host/fio.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 00:46:31.222 13:08:50 -- host/fio.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:46:31.479 13:08:50 -- host/fio.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:46:31.737 13:08:51 -- host/fio.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:46:31.994 13:08:51 -- host/fio.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:46:32.252 13:08:51 -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:46:32.252 13:08:51 -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:46:32.252 13:08:51 -- host/fio.sh@86 -- # nvmftestfini 00:46:32.252 13:08:51 -- nvmf/common.sh@476 -- # nvmfcleanup 00:46:32.252 13:08:51 -- nvmf/common.sh@116 -- # sync 
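For reference, the free-space figures echoed in the trace follow directly from the bdev_lvol_get_lvstores output above: lvs_0 sits on Nvme0n1 with cluster_size 1073741824 and 4 free clusters, so get_lvs_free_mb reports 4 * 1073741824 / 1048576 = 4096 MiB, all of which went into lbd_0; the nested lvs_n_0 carved out of lvs_0/lbd_0 uses cluster_size 4194304 and has 1022 free clusters, i.e. 1022 * 4194304 / 1048576 = 4088 MiB for lbd_nest_0. The teardown traced here unwinds that stack from the top down; a condensed sketch of the ordering, reusing the rpc.py calls from the trace (illustrative only, assumes a live SPDK target still holding these objects):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # stop exporting the volume before deleting it
  $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3
  # innermost objects first: lbd_nest_0 lives in lvs_n_0, which sits on lvs_0/lbd_0
  $rpc bdev_lvol_delete lvs_n_0/lbd_nest_0
  $rpc bdev_lvol_delete_lvstore -l lvs_n_0
  $rpc bdev_lvol_delete lvs_0/lbd_0
  $rpc bdev_lvol_delete_lvstore -l lvs_0
  # finally release the PCIe-attached controller that backed lvs_0
  $rpc bdev_nvme_detach_controller Nvme0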
00:46:32.252 13:08:51 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:46:32.252 13:08:51 -- nvmf/common.sh@119 -- # set +e 00:46:32.252 13:08:51 -- nvmf/common.sh@120 -- # for i in {1..20} 00:46:32.252 13:08:51 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:46:32.252 rmmod nvme_tcp 00:46:32.252 rmmod nvme_fabrics 00:46:32.252 rmmod nvme_keyring 00:46:32.252 13:08:51 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:46:32.252 13:08:51 -- nvmf/common.sh@123 -- # set -e 00:46:32.252 13:08:51 -- nvmf/common.sh@124 -- # return 0 00:46:32.252 13:08:51 -- nvmf/common.sh@477 -- # '[' -n 93939 ']' 00:46:32.252 13:08:51 -- nvmf/common.sh@478 -- # killprocess 93939 00:46:32.252 13:08:51 -- common/autotest_common.sh@926 -- # '[' -z 93939 ']' 00:46:32.252 13:08:51 -- common/autotest_common.sh@930 -- # kill -0 93939 00:46:32.252 13:08:51 -- common/autotest_common.sh@931 -- # uname 00:46:32.252 13:08:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:46:32.252 13:08:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 93939 00:46:32.252 killing process with pid 93939 00:46:32.252 13:08:51 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:46:32.252 13:08:51 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:46:32.252 13:08:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 93939' 00:46:32.252 13:08:51 -- common/autotest_common.sh@945 -- # kill 93939 00:46:32.252 13:08:51 -- common/autotest_common.sh@950 -- # wait 93939 00:46:32.509 13:08:51 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:46:32.509 13:08:51 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:46:32.509 13:08:51 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:46:32.509 13:08:51 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:46:32.509 13:08:51 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:46:32.509 13:08:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:46:32.509 13:08:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:46:32.509 13:08:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:46:32.509 13:08:51 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:46:32.509 00:46:32.509 real 0m18.353s 00:46:32.509 user 1m21.097s 00:46:32.509 sys 0m4.299s 00:46:32.509 13:08:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:46:32.509 13:08:51 -- common/autotest_common.sh@10 -- # set +x 00:46:32.509 ************************************ 00:46:32.509 END TEST nvmf_fio_host 00:46:32.509 ************************************ 00:46:32.509 13:08:51 -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:46:32.509 13:08:51 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:46:32.509 13:08:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:46:32.509 13:08:51 -- common/autotest_common.sh@10 -- # set +x 00:46:32.509 ************************************ 00:46:32.509 START TEST nvmf_failover 00:46:32.509 ************************************ 00:46:32.509 13:08:51 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:46:32.767 * Looking for test storage... 
00:46:32.767 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:46:32.767 13:08:51 -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:46:32.767 13:08:51 -- nvmf/common.sh@7 -- # uname -s 00:46:32.767 13:08:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:46:32.767 13:08:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:46:32.767 13:08:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:46:32.767 13:08:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:46:32.767 13:08:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:46:32.767 13:08:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:46:32.767 13:08:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:46:32.767 13:08:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:46:32.767 13:08:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:46:32.767 13:08:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:46:32.767 13:08:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:864ae4a3-cd96-439b-8ef2-6dab4d992115 00:46:32.767 13:08:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=864ae4a3-cd96-439b-8ef2-6dab4d992115 00:46:32.767 13:08:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:46:32.767 13:08:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:46:32.767 13:08:51 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:46:32.767 13:08:51 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:46:32.767 13:08:51 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:46:32.767 13:08:51 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:46:32.767 13:08:51 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:46:32.767 13:08:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:32.767 13:08:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:32.767 13:08:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:32.767 13:08:51 -- paths/export.sh@5 
-- # export PATH 00:46:32.767 13:08:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:32.767 13:08:51 -- nvmf/common.sh@46 -- # : 0 00:46:32.767 13:08:51 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:46:32.767 13:08:51 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:46:32.767 13:08:51 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:46:32.767 13:08:51 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:46:32.767 13:08:51 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:46:32.767 13:08:51 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:46:32.767 13:08:51 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:46:32.767 13:08:51 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:46:32.767 13:08:51 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:46:32.767 13:08:51 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:46:32.767 13:08:51 -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:46:32.767 13:08:51 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:46:32.767 13:08:51 -- host/failover.sh@18 -- # nvmftestinit 00:46:32.767 13:08:51 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:46:32.767 13:08:51 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:46:32.767 13:08:51 -- nvmf/common.sh@436 -- # prepare_net_devs 00:46:32.767 13:08:51 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:46:32.767 13:08:51 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:46:32.767 13:08:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:46:32.767 13:08:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:46:32.767 13:08:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:46:32.767 13:08:51 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:46:32.767 13:08:51 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:46:32.767 13:08:51 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:46:32.767 13:08:51 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:46:32.767 13:08:51 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:46:32.767 13:08:51 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:46:32.767 13:08:51 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:46:32.767 13:08:51 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:46:32.767 13:08:51 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:46:32.767 13:08:51 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:46:32.767 13:08:51 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:46:32.767 13:08:51 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:46:32.767 13:08:51 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:46:32.767 13:08:51 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:46:32.767 13:08:51 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:46:32.767 13:08:51 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:46:32.767 13:08:51 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 
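The NVMF_* variables being assigned here (NVMF_TARGET_BRIDGE2 follows immediately below) describe the virtual test network that nvmf_veth_init goes on to build in the next stretch of the trace: the initiator keeps one end of a veth pair in the default namespace, while both target interfaces are moved into the nvmf_tgt_ns_spdk namespace. Collapsed into a sketch for orientation (interface names and addresses as traced below; the link-up, bridge and iptables steps follow in the log):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side, default namespace
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # first target interface
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2   # second target interface
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                    # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2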
00:46:32.767 13:08:51 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:46:32.767 13:08:51 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:46:32.767 13:08:52 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:46:32.767 Cannot find device "nvmf_tgt_br" 00:46:32.767 13:08:52 -- nvmf/common.sh@154 -- # true 00:46:32.767 13:08:52 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:46:32.767 Cannot find device "nvmf_tgt_br2" 00:46:32.767 13:08:52 -- nvmf/common.sh@155 -- # true 00:46:32.767 13:08:52 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:46:32.767 13:08:52 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:46:32.767 Cannot find device "nvmf_tgt_br" 00:46:32.767 13:08:52 -- nvmf/common.sh@157 -- # true 00:46:32.767 13:08:52 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:46:32.767 Cannot find device "nvmf_tgt_br2" 00:46:32.767 13:08:52 -- nvmf/common.sh@158 -- # true 00:46:32.767 13:08:52 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:46:32.767 13:08:52 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:46:32.767 13:08:52 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:46:32.767 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:46:32.767 13:08:52 -- nvmf/common.sh@161 -- # true 00:46:32.767 13:08:52 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:46:32.767 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:46:32.767 13:08:52 -- nvmf/common.sh@162 -- # true 00:46:32.767 13:08:52 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:46:32.767 13:08:52 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:46:32.767 13:08:52 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:46:32.767 13:08:52 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:46:32.767 13:08:52 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:46:32.767 13:08:52 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:46:32.767 13:08:52 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:46:32.767 13:08:52 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:46:32.767 13:08:52 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:46:33.027 13:08:52 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:46:33.027 13:08:52 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:46:33.027 13:08:52 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:46:33.027 13:08:52 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:46:33.027 13:08:52 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:46:33.027 13:08:52 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:46:33.027 13:08:52 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:46:33.027 13:08:52 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:46:33.027 13:08:52 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:46:33.027 13:08:52 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:46:33.027 13:08:52 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:46:33.027 13:08:52 -- nvmf/common.sh@197 -- # ip 
link set nvmf_tgt_br2 master nvmf_br 00:46:33.027 13:08:52 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:46:33.027 13:08:52 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:46:33.027 13:08:52 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:46:33.027 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:46:33.027 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.092 ms 00:46:33.027 00:46:33.027 --- 10.0.0.2 ping statistics --- 00:46:33.027 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:46:33.027 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:46:33.027 13:08:52 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:46:33.027 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:46:33.027 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:46:33.027 00:46:33.027 --- 10.0.0.3 ping statistics --- 00:46:33.027 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:46:33.027 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:46:33.027 13:08:52 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:46:33.027 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:46:33.027 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.053 ms 00:46:33.028 00:46:33.028 --- 10.0.0.1 ping statistics --- 00:46:33.028 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:46:33.028 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:46:33.028 13:08:52 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:46:33.028 13:08:52 -- nvmf/common.sh@421 -- # return 0 00:46:33.028 13:08:52 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:46:33.028 13:08:52 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:46:33.028 13:08:52 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:46:33.028 13:08:52 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:46:33.028 13:08:52 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:46:33.028 13:08:52 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:46:33.028 13:08:52 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:46:33.028 13:08:52 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:46:33.028 13:08:52 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:46:33.028 13:08:52 -- common/autotest_common.sh@712 -- # xtrace_disable 00:46:33.028 13:08:52 -- common/autotest_common.sh@10 -- # set +x 00:46:33.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:46:33.028 13:08:52 -- nvmf/common.sh@469 -- # nvmfpid=94649 00:46:33.028 13:08:52 -- nvmf/common.sh@470 -- # waitforlisten 94649 00:46:33.028 13:08:52 -- common/autotest_common.sh@819 -- # '[' -z 94649 ']' 00:46:33.028 13:08:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:46:33.028 13:08:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:46:33.028 13:08:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:46:33.028 13:08:52 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:46:33.028 13:08:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:46:33.028 13:08:52 -- common/autotest_common.sh@10 -- # set +x 00:46:33.028 [2024-07-22 13:08:52.387952] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:46:33.028 [2024-07-22 13:08:52.388036] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:46:33.287 [2024-07-22 13:08:52.526434] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:46:33.287 [2024-07-22 13:08:52.589930] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:46:33.287 [2024-07-22 13:08:52.590277] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:46:33.287 [2024-07-22 13:08:52.590303] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:46:33.287 [2024-07-22 13:08:52.590312] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:46:33.287 [2024-07-22 13:08:52.590438] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:46:33.287 [2024-07-22 13:08:52.590970] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:46:33.287 [2024-07-22 13:08:52.590983] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:46:34.220 13:08:53 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:46:34.220 13:08:53 -- common/autotest_common.sh@852 -- # return 0 00:46:34.220 13:08:53 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:46:34.220 13:08:53 -- common/autotest_common.sh@718 -- # xtrace_disable 00:46:34.220 13:08:53 -- common/autotest_common.sh@10 -- # set +x 00:46:34.220 13:08:53 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:46:34.220 13:08:53 -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:46:34.220 [2024-07-22 13:08:53.597990] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:46:34.220 13:08:53 -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:46:34.478 Malloc0 00:46:34.478 13:08:53 -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:46:34.736 13:08:54 -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:46:34.993 13:08:54 -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:46:35.251 [2024-07-22 13:08:54.474478] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:46:35.251 13:08:54 -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:46:35.508 [2024-07-22 13:08:54.730716] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:46:35.508 13:08:54 -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:46:35.766 [2024-07-22 13:08:54.938871] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:46:35.766 13:08:54 -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 
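The failover scaffolding assembled just above boils down to one malloc-backed namespace exposed through three TCP listeners on the same subsystem, with bdevperf started as a separate process (-z) that is driven over its own RPC socket. A condensed sketch of that target-side setup, using the same rpc.py calls as the trace (illustrative only, assumes a running nvmf_tgt):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  # three listeners on one subsystem give the initiator alternate paths to fail over between
  for port in 4420 4421 4422; do
      $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s $port
  done
  # bdevperf waits (-z) for perform_tests to arrive over /var/tmp/bdevperf.sock
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &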
00:46:35.766 13:08:54 -- host/failover.sh@31 -- # bdevperf_pid=94765 00:46:35.766 13:08:54 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:46:35.766 13:08:54 -- host/failover.sh@34 -- # waitforlisten 94765 /var/tmp/bdevperf.sock 00:46:35.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:46:35.766 13:08:54 -- common/autotest_common.sh@819 -- # '[' -z 94765 ']' 00:46:35.766 13:08:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:46:35.766 13:08:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:46:35.766 13:08:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:46:35.766 13:08:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:46:35.766 13:08:54 -- common/autotest_common.sh@10 -- # set +x 00:46:36.697 13:08:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:46:36.697 13:08:55 -- common/autotest_common.sh@852 -- # return 0 00:46:36.697 13:08:55 -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:46:36.954 NVMe0n1 00:46:36.954 13:08:56 -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:46:37.214 00:46:37.214 13:08:56 -- host/failover.sh@39 -- # run_test_pid=94814 00:46:37.214 13:08:56 -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:46:37.214 13:08:56 -- host/failover.sh@41 -- # sleep 1 00:46:38.148 13:08:57 -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:46:38.406 [2024-07-22 13:08:57.820314] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x674850 is same with the state(5) to be set 00:46:38.406 [2024-07-22 13:08:57.820370] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x674850 is same with the state(5) to be set 00:46:38.406 [2024-07-22 13:08:57.820398] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x674850 is same with the state(5) to be set 00:46:38.406 [2024-07-22 13:08:57.820407] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x674850 is same with the state(5) to be set 00:46:38.406 [2024-07-22 13:08:57.820415] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x674850 is same with the state(5) to be set 00:46:38.406 [2024-07-22 13:08:57.820423] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x674850 is same with the state(5) to be set 00:46:38.406 [2024-07-22 13:08:57.820431] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x674850 is same with the state(5) to be set 00:46:38.406 [2024-07-22 13:08:57.820439] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x674850 is same with the state(5) to be set 00:46:38.407 [2024-07-22 13:08:57.820446] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x674850 is same with 
the state(5) to be set 00:46:38.407 [2024-07-22 13:08:57.820454] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x674850 is same with the state(5) to be set 00:46:38.407 [2024-07-22 13:08:57.820462] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x674850 is same with the state(5) to be set 00:46:38.407 [2024-07-22 13:08:57.820470] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x674850 is same with the state(5) to be set 00:46:38.407 [2024-07-22 13:08:57.820477] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x674850 is same with the state(5) to be set 00:46:38.407 [2024-07-22 13:08:57.820485] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x674850 is same with the state(5) to be set 00:46:38.407 [2024-07-22 13:08:57.820494] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x674850 is same with the state(5) to be set 00:46:38.407 [2024-07-22 13:08:57.820502] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x674850 is same with the state(5) to be set 00:46:38.407 [2024-07-22 13:08:57.820509] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x674850 is same with the state(5) to be set 00:46:38.407 [2024-07-22 13:08:57.820517] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x674850 is same with the state(5) to be set 00:46:38.407 [2024-07-22 13:08:57.820524] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x674850 is same with the state(5) to be set 00:46:38.407 [2024-07-22 13:08:57.820547] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x674850 is same with the state(5) to be set 00:46:38.407 [2024-07-22 13:08:57.820554] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x674850 is same with the state(5) to be set 00:46:38.407 [2024-07-22 13:08:57.820562] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x674850 is same with the state(5) to be set 00:46:38.407 [2024-07-22 13:08:57.820569] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x674850 is same with the state(5) to be set 00:46:38.407 [2024-07-22 13:08:57.820576] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x674850 is same with the state(5) to be set 00:46:38.407 [2024-07-22 13:08:57.820584] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x674850 is same with the state(5) to be set 00:46:38.407 [2024-07-22 13:08:57.820591] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x674850 is same with the state(5) to be set 00:46:38.407 [2024-07-22 13:08:57.820598] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x674850 is same with the state(5) to be set 00:46:38.407 [2024-07-22 13:08:57.820622] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x674850 is same with the state(5) to be set 00:46:38.407 [2024-07-22 13:08:57.820645] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x674850 is same with the state(5) to be set 00:46:38.407 [2024-07-22 13:08:57.820653] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x674850 is same with the state(5) to be set 00:46:38.407 [2024-07-22 13:08:57.820661] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state 
of tqpair=0x674850 is same with the state(5) to be set 00:46:38.407 [2024-07-22 13:08:57.820669] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x674850 is same with the state(5) to be set 00:46:38.407 [2024-07-22 13:08:57.820678] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x674850 is same with the state(5) to be set 00:46:38.407 [2024-07-22 13:08:57.820686] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x674850 is same with the state(5) to be set 00:46:38.407 [2024-07-22 13:08:57.820694] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x674850 is same with the state(5) to be set 00:46:38.407 [2024-07-22 13:08:57.820703] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x674850 is same with the state(5) to be set 00:46:38.407 [2024-07-22 13:08:57.820711] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x674850 is same with the state(5) to be set 00:46:38.407 [2024-07-22 13:08:57.820720] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x674850 is same with the state(5) to be set 00:46:38.407 [2024-07-22 13:08:57.820728] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x674850 is same with the state(5) to be set 00:46:38.407 [2024-07-22 13:08:57.820736] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x674850 is same with the state(5) to be set 00:46:38.407 [2024-07-22 13:08:57.820744] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x674850 is same with the state(5) to be set 00:46:38.407 [2024-07-22 13:08:57.820752] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x674850 is same with the state(5) to be set 00:46:38.407 [2024-07-22 13:08:57.820760] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x674850 is same with the state(5) to be set 00:46:38.407 [2024-07-22 13:08:57.820768] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x674850 is same with the state(5) to be set 00:46:38.407 [2024-07-22 13:08:57.820776] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x674850 is same with the state(5) to be set 00:46:38.407 [2024-07-22 13:08:57.820784] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x674850 is same with the state(5) to be set 00:46:38.407 [2024-07-22 13:08:57.820792] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x674850 is same with the state(5) to be set 00:46:38.407 [2024-07-22 13:08:57.820800] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x674850 is same with the state(5) to be set 00:46:38.407 [2024-07-22 13:08:57.820808] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x674850 is same with the state(5) to be set 00:46:38.407 [2024-07-22 13:08:57.820816] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x674850 is same with the state(5) to be set 00:46:38.407 [2024-07-22 13:08:57.820824] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x674850 is same with the state(5) to be set 00:46:38.407 [2024-07-22 13:08:57.820831] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x674850 is same with the state(5) to be set 00:46:38.665 13:08:57 -- host/failover.sh@45 -- # sleep 3 00:46:41.944 
13:09:00 -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:46:41.944
00:46:41.944 13:09:01 -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:46:41.944 [2024-07-22 13:09:01.352344] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x675f90 is same with the state(5) to be set
00:46:41.945 (the same tcp.c:1574 *ERROR* line repeats for tqpair=0x675f90 at timestamps 13:09:01.352393 through 13:09:01.352863)
00:46:42.202 13:09:01 -- host/failover.sh@50 -- # sleep 3
00:46:45.531 13:09:04 -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:46:45.531 [2024-07-22 13:09:04.606662] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:46:45.531 13:09:04 -- host/failover.sh@55 -- # sleep 1
00:46:46.502 13:09:05 -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:46:46.502 [2024-07-22 13:09:05.878845] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x677010 is same with the state(5) to be set
00:46:46.503 (the same tcp.c:1574 *ERROR* line repeats for tqpair=0x677010 at timestamps 13:09:05.878912 through 13:09:05.879660)
00:46:46.503 13:09:05 -- host/failover.sh@59 -- # wait 94814
00:46:53.063 0
00:46:53.063 13:09:11 -- host/failover.sh@61 -- # killprocess 94765
00:46:53.063 13:09:11 -- common/autotest_common.sh@926 -- # '[' -z 94765 ']'
00:46:53.063 13:09:11 -- common/autotest_common.sh@930 -- # kill -0 94765
00:46:53.063 13:09:11 -- common/autotest_common.sh@931 -- # uname
00:46:53.063 13:09:11 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:46:53.063 13:09:11 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 94765
00:46:53.063 killing process with pid 94765
13:09:11 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:46:53.063 13:09:11 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:46:53.063 13:09:11 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 94765'
00:46:53.063 13:09:11 -- common/autotest_common.sh@945 -- # kill 94765
00:46:53.063 13:09:11 -- common/autotest_common.sh@950 -- # wait 94765
00:46:53.063 13:09:11 -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:46:53.063 [2024-07-22 13:08:54.998113] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization...
00:46:53.063 [2024-07-22 13:08:54.998311] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94765 ]
00:46:53.063 [2024-07-22 13:08:55.132518] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:46:53.063 [2024-07-22 13:08:55.191063] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:46:53.063 Running I/O for 15 seconds...
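The listener flip that host/failover.sh drives in steps 47-57 above can be condensed into a short shell sketch of the same rpc.py calls recorded in the log. This is only an illustration, assuming a target that already exports nqn.2016-06.io.spdk:cnode1 on 10.0.0.2 and a bdevperf instance whose RPC socket is /var/tmp/bdevperf.sock; it is not part of the captured run.

#!/usr/bin/env bash
# Sketch of the failover exercise above: give the initiator a spare path,
# then drop the listener it is using so bdev_nvme has to fail over.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Attach a second controller path to the subsystem on port 4422 (bdevperf RPC socket).
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

# Remove the listener currently in use, forcing I/O onto the remaining path.
$rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
sleep 3

# Bring the original port back, then retire the temporary one.
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
sleep 1
$rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422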
00:46:53.063 [2024-07-22 13:08:57.821107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:2376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.063 [2024-07-22 13:08:57.821149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.063 [2024-07-22 13:08:57.821177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:2408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.063 [2024-07-22 13:08:57.821192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.063 [2024-07-22 13:08:57.821208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:2416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.063 [2024-07-22 13:08:57.821222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.063 [2024-07-22 13:08:57.821249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:1760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.063 [2024-07-22 13:08:57.821264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.063 [2024-07-22 13:08:57.821278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:1776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.063 [2024-07-22 13:08:57.821291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.063 [2024-07-22 13:08:57.821306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:1784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.063 [2024-07-22 13:08:57.821319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.063 [2024-07-22 13:08:57.821334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:1792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.063 [2024-07-22 13:08:57.821347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.063 [2024-07-22 13:08:57.821361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:1800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.063 [2024-07-22 13:08:57.821374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.063 [2024-07-22 13:08:57.821389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:1808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.063 [2024-07-22 13:08:57.821402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.063 [2024-07-22 13:08:57.821417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:1816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.063 [2024-07-22 13:08:57.821430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.063 [2024-07-22 13:08:57.821445] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:1832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.063 [2024-07-22 13:08:57.821458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.063 [2024-07-22 13:08:57.821493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:2424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.063 [2024-07-22 13:08:57.821507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.063 [2024-07-22 13:08:57.821522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.063 [2024-07-22 13:08:57.821535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.063 [2024-07-22 13:08:57.821549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:2456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.063 [2024-07-22 13:08:57.821562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.063 [2024-07-22 13:08:57.821577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.063 [2024-07-22 13:08:57.821589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.063 [2024-07-22 13:08:57.821604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:2472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.063 [2024-07-22 13:08:57.821623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.063 [2024-07-22 13:08:57.821639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:2488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.063 [2024-07-22 13:08:57.821669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.063 [2024-07-22 13:08:57.821684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.063 [2024-07-22 13:08:57.821697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.063 [2024-07-22 13:08:57.821712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:1864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.063 [2024-07-22 13:08:57.821725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.063 [2024-07-22 13:08:57.821740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.063 [2024-07-22 13:08:57.821753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.063 [2024-07-22 13:08:57.821768] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:96 nsid:1 lba:1928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.063 [2024-07-22 13:08:57.821781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.064 [2024-07-22 13:08:57.821796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:1936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.064 [2024-07-22 13:08:57.821809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.064 [2024-07-22 13:08:57.821824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:1968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.064 [2024-07-22 13:08:57.821837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.064 [2024-07-22 13:08:57.821852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:1992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.064 [2024-07-22 13:08:57.821876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.064 [2024-07-22 13:08:57.821892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:2008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.064 [2024-07-22 13:08:57.821905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.064 [2024-07-22 13:08:57.821920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:2528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.064 [2024-07-22 13:08:57.821933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.064 [2024-07-22 13:08:57.821948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.064 [2024-07-22 13:08:57.821962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.064 [2024-07-22 13:08:57.821976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:2544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.064 [2024-07-22 13:08:57.821989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.064 [2024-07-22 13:08:57.822004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:2016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.064 [2024-07-22 13:08:57.822017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.064 [2024-07-22 13:08:57.822032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:2032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.064 [2024-07-22 13:08:57.822045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.064 [2024-07-22 13:08:57.822060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:2040 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.064 [2024-07-22 13:08:57.822073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.064 [2024-07-22 13:08:57.822088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:2072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.064 [2024-07-22 13:08:57.822107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.064 [2024-07-22 13:08:57.822122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:2080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.064 [2024-07-22 13:08:57.822136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.064 [2024-07-22 13:08:57.822160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:2096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.064 [2024-07-22 13:08:57.822175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.064 [2024-07-22 13:08:57.822191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:2104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.064 [2024-07-22 13:08:57.822204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.064 [2024-07-22 13:08:57.822219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:2112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.064 [2024-07-22 13:08:57.822232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.064 [2024-07-22 13:08:57.822247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:2560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.064 [2024-07-22 13:08:57.822268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.064 [2024-07-22 13:08:57.822284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:2576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.064 [2024-07-22 13:08:57.822297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.064 [2024-07-22 13:08:57.822312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:2584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.064 [2024-07-22 13:08:57.822325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.064 [2024-07-22 13:08:57.822340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:2608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.064 [2024-07-22 13:08:57.822353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.064 [2024-07-22 13:08:57.822368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:2624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.064 
[2024-07-22 13:08:57.822381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.064 [2024-07-22 13:08:57.822398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:2632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:53.064 [2024-07-22 13:08:57.822417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.064 [2024-07-22 13:08:57.822432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:2640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:53.064 [2024-07-22 13:08:57.822445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.064 [2024-07-22 13:08:57.822460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:2648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:53.064 [2024-07-22 13:08:57.822474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.064 [2024-07-22 13:08:57.822488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:2656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:53.064 [2024-07-22 13:08:57.822502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.064 [2024-07-22 13:08:57.822517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:2664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.064 [2024-07-22 13:08:57.822530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.064 [2024-07-22 13:08:57.822545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:2672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.064 [2024-07-22 13:08:57.822558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.064 [2024-07-22 13:08:57.822582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:2680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:53.064 [2024-07-22 13:08:57.822612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.064 [2024-07-22 13:08:57.822627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:2688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:53.064 [2024-07-22 13:08:57.822640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.064 [2024-07-22 13:08:57.822662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:2696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.064 [2024-07-22 13:08:57.822676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.064 [2024-07-22 13:08:57.822691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:2704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:53.064 [2024-07-22 13:08:57.822704] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.064 [2024-07-22 13:08:57.822718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:2712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:53.064 [2024-07-22 13:08:57.822731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.064 [2024-07-22 13:08:57.822746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:2720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.064 [2024-07-22 13:08:57.822759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.064 [2024-07-22 13:08:57.822774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:2728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.064 [2024-07-22 13:08:57.822787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.064 [2024-07-22 13:08:57.822803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:2736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.064 [2024-07-22 13:08:57.822816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.064 [2024-07-22 13:08:57.822831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:2744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:53.064 [2024-07-22 13:08:57.822844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.064 [2024-07-22 13:08:57.822859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:2752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.064 [2024-07-22 13:08:57.822873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.064 [2024-07-22 13:08:57.822888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:2760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:53.064 [2024-07-22 13:08:57.822902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.064 [2024-07-22 13:08:57.822916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:2768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.064 [2024-07-22 13:08:57.822929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.064 [2024-07-22 13:08:57.822944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:2776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:53.064 [2024-07-22 13:08:57.822958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.064 [2024-07-22 13:08:57.822973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:2784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.065 [2024-07-22 13:08:57.822986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.065 [2024-07-22 13:08:57.823000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:2792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:53.065 [2024-07-22 13:08:57.823020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.065 [2024-07-22 13:08:57.823045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:2800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.065 [2024-07-22 13:08:57.823058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.065 [2024-07-22 13:08:57.823073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:2808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:53.065 [2024-07-22 13:08:57.823091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.065 [2024-07-22 13:08:57.823106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:53.065 [2024-07-22 13:08:57.823120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.065 [2024-07-22 13:08:57.823135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:2824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:53.065 [2024-07-22 13:08:57.823159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.065 [2024-07-22 13:08:57.823174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.065 [2024-07-22 13:08:57.823187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.065 [2024-07-22 13:08:57.823202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:2840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.065 [2024-07-22 13:08:57.823215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.065 [2024-07-22 13:08:57.823231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:2144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.065 [2024-07-22 13:08:57.823245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.065 [2024-07-22 13:08:57.823260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:2152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.065 [2024-07-22 13:08:57.823272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.065 [2024-07-22 13:08:57.823288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:2176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.065 [2024-07-22 13:08:57.823301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:46:53.065 [2024-07-22 13:08:57.823316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.065 [2024-07-22 13:08:57.823329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.065 [2024-07-22 13:08:57.823344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:2200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.065 [2024-07-22 13:08:57.823357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.065 [2024-07-22 13:08:57.823372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:2232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.065 [2024-07-22 13:08:57.823384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.065 [2024-07-22 13:08:57.823399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:2248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.065 [2024-07-22 13:08:57.823419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.065 [2024-07-22 13:08:57.823435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:2256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.065 [2024-07-22 13:08:57.823449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.065 [2024-07-22 13:08:57.823463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:2848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:53.065 [2024-07-22 13:08:57.823476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.065 [2024-07-22 13:08:57.823507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:2856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.065 [2024-07-22 13:08:57.823520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.065 [2024-07-22 13:08:57.823534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:2864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:53.065 [2024-07-22 13:08:57.823547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.065 [2024-07-22 13:08:57.823562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:2872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.065 [2024-07-22 13:08:57.823579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.065 [2024-07-22 13:08:57.823594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:2880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:53.065 [2024-07-22 13:08:57.823607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.065 [2024-07-22 13:08:57.823621] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:2888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.065 [2024-07-22 13:08:57.823634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.065 [2024-07-22 13:08:57.823648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:2896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:53.065 [2024-07-22 13:08:57.823661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.065 [2024-07-22 13:08:57.823676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:2904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:53.065 [2024-07-22 13:08:57.823689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.065 [2024-07-22 13:08:57.823703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:2272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.065 [2024-07-22 13:08:57.823716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.065 [2024-07-22 13:08:57.823731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:2288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.065 [2024-07-22 13:08:57.823744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.065 [2024-07-22 13:08:57.823758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:2312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.065 [2024-07-22 13:08:57.823771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.065 [2024-07-22 13:08:57.823792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:2328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.065 [2024-07-22 13:08:57.823805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.065 [2024-07-22 13:08:57.823820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:2336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.065 [2024-07-22 13:08:57.823833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.065 [2024-07-22 13:08:57.823864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:2344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.065 [2024-07-22 13:08:57.823877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.065 [2024-07-22 13:08:57.823892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:2352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.065 [2024-07-22 13:08:57.823905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.065 [2024-07-22 13:08:57.823920] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:126 nsid:1 lba:2360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.065 [2024-07-22 13:08:57.823934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.065 [2024-07-22 13:08:57.823948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:2912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.065 [2024-07-22 13:08:57.823961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.065 [2024-07-22 13:08:57.823976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:2920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:53.065 [2024-07-22 13:08:57.823989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.065 [2024-07-22 13:08:57.824004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:2928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.065 [2024-07-22 13:08:57.824017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.065 [2024-07-22 13:08:57.824032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:53.065 [2024-07-22 13:08:57.824050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.065 [2024-07-22 13:08:57.824065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:2944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:53.065 [2024-07-22 13:08:57.824078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.065 [2024-07-22 13:08:57.824093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:2952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.065 [2024-07-22 13:08:57.824106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.065 [2024-07-22 13:08:57.824121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:2960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:53.065 [2024-07-22 13:08:57.824134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.065 [2024-07-22 13:08:57.824149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:2968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:53.065 [2024-07-22 13:08:57.824180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.065 [2024-07-22 13:08:57.824197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:2976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.066 [2024-07-22 13:08:57.824211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.066 [2024-07-22 13:08:57.824226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:2984 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:46:53.066 [2024-07-22 13:08:57.824240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.066 [2024-07-22 13:08:57.824255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:2992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.066 [2024-07-22 13:08:57.824268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.066 [2024-07-22 13:08:57.824283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:3000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.066 [2024-07-22 13:08:57.824301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.066 [2024-07-22 13:08:57.824316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:3008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:53.066 [2024-07-22 13:08:57.824329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.066 [2024-07-22 13:08:57.824344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:53.066 [2024-07-22 13:08:57.824357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.066 [2024-07-22 13:08:57.824372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:3024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.066 [2024-07-22 13:08:57.824385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.066 [2024-07-22 13:08:57.824400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:3032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:53.066 [2024-07-22 13:08:57.824414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.066 [2024-07-22 13:08:57.824429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:2368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.066 [2024-07-22 13:08:57.824442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.066 [2024-07-22 13:08:57.824457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:2384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.066 [2024-07-22 13:08:57.824470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.066 [2024-07-22 13:08:57.824485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:2392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.066 [2024-07-22 13:08:57.824498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.066 [2024-07-22 13:08:57.824513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:2400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.066 [2024-07-22 
13:08:57.824531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.066 [2024-07-22 13:08:57.824554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:2432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.066 [2024-07-22 13:08:57.824568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.066 [2024-07-22 13:08:57.824583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:2440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.066 [2024-07-22 13:08:57.824597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.066 [2024-07-22 13:08:57.824612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:2480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.066 [2024-07-22 13:08:57.824625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.066 [2024-07-22 13:08:57.824640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:2496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.066 [2024-07-22 13:08:57.824653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.066 [2024-07-22 13:08:57.824668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.066 [2024-07-22 13:08:57.824681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.066 [2024-07-22 13:08:57.824696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:3048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.066 [2024-07-22 13:08:57.824710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.066 [2024-07-22 13:08:57.824724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:3056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:53.066 [2024-07-22 13:08:57.824738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.066 [2024-07-22 13:08:57.824753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:3064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:53.066 [2024-07-22 13:08:57.824766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.066 [2024-07-22 13:08:57.824781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:2504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.066 [2024-07-22 13:08:57.824795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.066 [2024-07-22 13:08:57.824810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:2512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.066 [2024-07-22 13:08:57.824824] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.066 [2024-07-22 13:08:57.824839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:2520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.066 [2024-07-22 13:08:57.824852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.066 [2024-07-22 13:08:57.824867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:2552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.066 [2024-07-22 13:08:57.824880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.066 [2024-07-22 13:08:57.824895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.066 [2024-07-22 13:08:57.824909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.066 [2024-07-22 13:08:57.824930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:2592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.066 [2024-07-22 13:08:57.824944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.066 [2024-07-22 13:08:57.824959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:2600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.066 [2024-07-22 13:08:57.824972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.066 [2024-07-22 13:08:57.824986] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa61810 is same with the state(5) to be set 00:46:53.066 [2024-07-22 13:08:57.825023] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:46:53.066 [2024-07-22 13:08:57.825034] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:46:53.066 [2024-07-22 13:08:57.825044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2616 len:8 PRP1 0x0 PRP2 0x0 00:46:53.066 [2024-07-22 13:08:57.825057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.066 [2024-07-22 13:08:57.825112] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xa61810 was disconnected and freed. reset controller. 
00:46:53.066 [2024-07-22 13:08:57.825160] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:46:53.066 [2024-07-22 13:08:57.825244] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:46:53.066 [2024-07-22 13:08:57.825264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:46:53.066 [2024-07-22 13:08:57.825279] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:46:53.066 [2024-07-22 13:08:57.825292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:46:53.066 [2024-07-22 13:08:57.825306] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:46:53.066 [2024-07-22 13:08:57.825320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:46:53.066 [2024-07-22 13:08:57.825334] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:46:53.066 [2024-07-22 13:08:57.825347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:46:53.066 [2024-07-22 13:08:57.825361] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:46:53.066 [2024-07-22 13:08:57.827912] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:46:53.066 [2024-07-22 13:08:57.827953] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa37ea0 (9): Bad file descriptor
00:46:53.066 [2024-07-22 13:08:57.859643] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
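The burst of NOTICE prints above is the expected teardown of qpair 0xa61810 when the test forces the first path change: each in-flight READ/WRITE is completed as ABORTED - SQ DELETION (00/08), bdev_nvme fails over from 10.0.0.2:4420 to 10.0.0.2:4421, and the controller reset completes. A minimal sketch for pulling those transitions out of the noise, assuming the console output has been captured to a file (console.log is a hypothetical name; the patterns are copied from the messages printed above):

# Hypothetical post-processing of the captured console log.
grep -oE 'Start failover from [0-9.]+:[0-9]+ to [0-9.]+:[0-9]+' console.log
grep -o 'Resetting controller successful' console.log | wc -l    # completed resets
grep -o 'ABORTED - SQ DELETION' console.log | wc -l              # commands aborted by SQ deletion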
00:46:53.066 [2024-07-22 13:09:01.352953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:32656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.066 [2024-07-22 13:09:01.352999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.066 [2024-07-22 13:09:01.353023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:32664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.066 [2024-07-22 13:09:01.353038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.066 [2024-07-22 13:09:01.353071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:32696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.066 [2024-07-22 13:09:01.353084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.066 [2024-07-22 13:09:01.353097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:32712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.067 [2024-07-22 13:09:01.353109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.067 [2024-07-22 13:09:01.353122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:32720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.067 [2024-07-22 13:09:01.353134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.067 [2024-07-22 13:09:01.353147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:31960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.067 [2024-07-22 13:09:01.353176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.067 [2024-07-22 13:09:01.353190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:31976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.067 [2024-07-22 13:09:01.353216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.067 [2024-07-22 13:09:01.353232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:31984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.067 [2024-07-22 13:09:01.353245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.067 [2024-07-22 13:09:01.353260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:31992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.067 [2024-07-22 13:09:01.353272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.067 [2024-07-22 13:09:01.353287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:32048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.067 [2024-07-22 13:09:01.353299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.067 [2024-07-22 13:09:01.353314] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:32080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.067 [2024-07-22 13:09:01.353326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.067 [2024-07-22 13:09:01.353340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:32104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.067 [2024-07-22 13:09:01.353353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.067 [2024-07-22 13:09:01.353367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:32112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.067 [2024-07-22 13:09:01.353380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.067 [2024-07-22 13:09:01.353394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:32160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.067 [2024-07-22 13:09:01.353407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.067 [2024-07-22 13:09:01.353421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:32184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.067 [2024-07-22 13:09:01.353442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.067 [2024-07-22 13:09:01.353457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:32192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.067 [2024-07-22 13:09:01.353470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.067 [2024-07-22 13:09:01.353484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:32200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.067 [2024-07-22 13:09:01.353498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.067 [2024-07-22 13:09:01.353543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:32216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.067 [2024-07-22 13:09:01.353556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.067 [2024-07-22 13:09:01.353586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:32224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.067 [2024-07-22 13:09:01.353598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.067 [2024-07-22 13:09:01.353612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:32240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.067 [2024-07-22 13:09:01.353625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.067 [2024-07-22 13:09:01.353639] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:32248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.067 [2024-07-22 13:09:01.353651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.067 [2024-07-22 13:09:01.353665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:32736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.067 [2024-07-22 13:09:01.353678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.067 [2024-07-22 13:09:01.353692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.067 [2024-07-22 13:09:01.353704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.067 [2024-07-22 13:09:01.353718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:32752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.067 [2024-07-22 13:09:01.353730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.067 [2024-07-22 13:09:01.353744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:32760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.067 [2024-07-22 13:09:01.353757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.067 [2024-07-22 13:09:01.353771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:32776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.067 [2024-07-22 13:09:01.353784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.067 [2024-07-22 13:09:01.353797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:32288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.067 [2024-07-22 13:09:01.353810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.067 [2024-07-22 13:09:01.353834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:32312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.067 [2024-07-22 13:09:01.353847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.067 [2024-07-22 13:09:01.353860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:32328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.067 [2024-07-22 13:09:01.353873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.067 [2024-07-22 13:09:01.353887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:32344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.067 [2024-07-22 13:09:01.353899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.067 [2024-07-22 13:09:01.353913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:77 nsid:1 lba:32352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.067 [2024-07-22 13:09:01.353941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.067 [2024-07-22 13:09:01.353955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:32368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.067 [2024-07-22 13:09:01.353983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.067 [2024-07-22 13:09:01.353997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:32376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.067 [2024-07-22 13:09:01.354015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.067 [2024-07-22 13:09:01.354030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:32384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.067 [2024-07-22 13:09:01.354043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.067 [2024-07-22 13:09:01.354057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:32784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.067 [2024-07-22 13:09:01.354070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.067 [2024-07-22 13:09:01.354084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:32816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.067 [2024-07-22 13:09:01.354096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.068 [2024-07-22 13:09:01.354110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:32824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.068 [2024-07-22 13:09:01.354122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.068 [2024-07-22 13:09:01.354136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:32840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.068 [2024-07-22 13:09:01.354164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.068 [2024-07-22 13:09:01.354195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.068 [2024-07-22 13:09:01.354208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.068 [2024-07-22 13:09:01.354223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:32872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.068 [2024-07-22 13:09:01.354254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.068 [2024-07-22 13:09:01.354273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:32888 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:53.068 [2024-07-22 13:09:01.354286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.068 [2024-07-22 13:09:01.354301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:32896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:53.068 [2024-07-22 13:09:01.354315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.068 [2024-07-22 13:09:01.354330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.068 [2024-07-22 13:09:01.354343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.068 [2024-07-22 13:09:01.354358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:53.068 [2024-07-22 13:09:01.354371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.068 [2024-07-22 13:09:01.354386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:32920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:53.068 [2024-07-22 13:09:01.354399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.068 [2024-07-22 13:09:01.354429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:32928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.068 [2024-07-22 13:09:01.354442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.068 [2024-07-22 13:09:01.354456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:32936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:53.068 [2024-07-22 13:09:01.354469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.068 [2024-07-22 13:09:01.354483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:32944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.068 [2024-07-22 13:09:01.354496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.068 [2024-07-22 13:09:01.354511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:32952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.068 [2024-07-22 13:09:01.354529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.068 [2024-07-22 13:09:01.354555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:32960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.068 [2024-07-22 13:09:01.354608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.068 [2024-07-22 13:09:01.354623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:32968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:46:53.068 [2024-07-22 13:09:01.354635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.068 [2024-07-22 13:09:01.354650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:32976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.068 [2024-07-22 13:09:01.354663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.068 [2024-07-22 13:09:01.354677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:32984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.068 [2024-07-22 13:09:01.354699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.068 [2024-07-22 13:09:01.354714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:32992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.068 [2024-07-22 13:09:01.354727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.068 [2024-07-22 13:09:01.354741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:33000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.068 [2024-07-22 13:09:01.354753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.068 [2024-07-22 13:09:01.354767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:33008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:53.068 [2024-07-22 13:09:01.354779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.068 [2024-07-22 13:09:01.354793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:33016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:53.068 [2024-07-22 13:09:01.354805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.068 [2024-07-22 13:09:01.354819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:33024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.068 [2024-07-22 13:09:01.354832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.068 [2024-07-22 13:09:01.354845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:33032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:53.068 [2024-07-22 13:09:01.354858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.068 [2024-07-22 13:09:01.354872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:33040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:53.068 [2024-07-22 13:09:01.354884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.068 [2024-07-22 13:09:01.354912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:33048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.068 [2024-07-22 13:09:01.354924] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.068 [2024-07-22 13:09:01.354938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:32392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.068 [2024-07-22 13:09:01.354950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.068 [2024-07-22 13:09:01.354963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:32400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.068 [2024-07-22 13:09:01.354975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.068 [2024-07-22 13:09:01.354989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:32408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.068 [2024-07-22 13:09:01.355001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.068 [2024-07-22 13:09:01.355029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:32416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.068 [2024-07-22 13:09:01.355047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.068 [2024-07-22 13:09:01.355066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:32432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.068 [2024-07-22 13:09:01.355079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.068 [2024-07-22 13:09:01.355092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:32464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.068 [2024-07-22 13:09:01.355104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.068 [2024-07-22 13:09:01.355117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:32472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.068 [2024-07-22 13:09:01.355129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.068 [2024-07-22 13:09:01.355142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:32488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.068 [2024-07-22 13:09:01.355154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.068 [2024-07-22 13:09:01.355167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:33056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.068 [2024-07-22 13:09:01.355187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.068 [2024-07-22 13:09:01.355203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:33064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.068 [2024-07-22 13:09:01.355216] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.068 [2024-07-22 13:09:01.355229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:33072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:53.068 [2024-07-22 13:09:01.355241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.068 [2024-07-22 13:09:01.355254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:33080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:53.068 [2024-07-22 13:09:01.355266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.068 [2024-07-22 13:09:01.355279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:33088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.068 [2024-07-22 13:09:01.355291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.068 [2024-07-22 13:09:01.355304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:33096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:53.068 [2024-07-22 13:09:01.355316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.068 [2024-07-22 13:09:01.355329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:33104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:53.069 [2024-07-22 13:09:01.355341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.069 [2024-07-22 13:09:01.355354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:33112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.069 [2024-07-22 13:09:01.355366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.069 [2024-07-22 13:09:01.355379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:33120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:53.069 [2024-07-22 13:09:01.355398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.069 [2024-07-22 13:09:01.355413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:33128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:53.069 [2024-07-22 13:09:01.355424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.069 [2024-07-22 13:09:01.355437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:33136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:53.069 [2024-07-22 13:09:01.355450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.069 [2024-07-22 13:09:01.355463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:53.069 [2024-07-22 13:09:01.355482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.069 [2024-07-22 13:09:01.355495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:33152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:53.069 [2024-07-22 13:09:01.355507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.069 [2024-07-22 13:09:01.355521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:33160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.069 [2024-07-22 13:09:01.355533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.069 [2024-07-22 13:09:01.355546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:33168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:53.069 [2024-07-22 13:09:01.355557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.069 [2024-07-22 13:09:01.355570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:33176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:53.069 [2024-07-22 13:09:01.355582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.069 [2024-07-22 13:09:01.355596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:33184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:53.069 [2024-07-22 13:09:01.355607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.069 [2024-07-22 13:09:01.355620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:33192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.069 [2024-07-22 13:09:01.355632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.069 [2024-07-22 13:09:01.355645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:33200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.069 [2024-07-22 13:09:01.355657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.069 [2024-07-22 13:09:01.355670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:33208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:53.069 [2024-07-22 13:09:01.355681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.069 [2024-07-22 13:09:01.355694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:33216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:53.069 [2024-07-22 13:09:01.355706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.069 [2024-07-22 13:09:01.355725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:32496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.069 [2024-07-22 13:09:01.355738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:46:53.069 [2024-07-22 13:09:01.355751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:32512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.069 [2024-07-22 13:09:01.355763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.069 [2024-07-22 13:09:01.355776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.069 [2024-07-22 13:09:01.355787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.069 [2024-07-22 13:09:01.355801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:32528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.069 [2024-07-22 13:09:01.355812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.069 [2024-07-22 13:09:01.355825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.069 [2024-07-22 13:09:01.355837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.069 [2024-07-22 13:09:01.355851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:32576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.069 [2024-07-22 13:09:01.355863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.069 [2024-07-22 13:09:01.355876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:32584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.069 [2024-07-22 13:09:01.355892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.069 [2024-07-22 13:09:01.355906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:32600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.069 [2024-07-22 13:09:01.355917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.069 [2024-07-22 13:09:01.355931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:33224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:53.069 [2024-07-22 13:09:01.355943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.069 [2024-07-22 13:09:01.355956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:33232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.069 [2024-07-22 13:09:01.355968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.069 [2024-07-22 13:09:01.355981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:33240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.069 [2024-07-22 13:09:01.355992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.069 
[2024-07-22 13:09:01.356006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:33248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:53.069 [2024-07-22 13:09:01.356017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.069 [2024-07-22 13:09:01.356031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:33256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:53.069 [2024-07-22 13:09:01.356042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.069 [2024-07-22 13:09:01.356061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:32632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.069 [2024-07-22 13:09:01.356073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.069 [2024-07-22 13:09:01.356087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:32640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.069 [2024-07-22 13:09:01.356098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.069 [2024-07-22 13:09:01.356111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:32648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.069 [2024-07-22 13:09:01.356123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.069 [2024-07-22 13:09:01.356145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:32672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.069 [2024-07-22 13:09:01.356159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.069 [2024-07-22 13:09:01.356172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:32680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.069 [2024-07-22 13:09:01.356184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.069 [2024-07-22 13:09:01.356197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:32688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.069 [2024-07-22 13:09:01.356208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.069 [2024-07-22 13:09:01.356222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:32704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.069 [2024-07-22 13:09:01.356234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.069 [2024-07-22 13:09:01.356247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:32728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.069 [2024-07-22 13:09:01.356258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.069 [2024-07-22 13:09:01.356272] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:33264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.069 [2024-07-22 13:09:01.356284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.069 [2024-07-22 13:09:01.356297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:33272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:53.069 [2024-07-22 13:09:01.356313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.069 [2024-07-22 13:09:01.356326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:33280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:53.069 [2024-07-22 13:09:01.356338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.069 [2024-07-22 13:09:01.356351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:33288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:53.069 [2024-07-22 13:09:01.356364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.069 [2024-07-22 13:09:01.356377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:33296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:53.069 [2024-07-22 13:09:01.356395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.069 [2024-07-22 13:09:01.356409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:33304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:53.069 [2024-07-22 13:09:01.356421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.069 [2024-07-22 13:09:01.356435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:33312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:53.069 [2024-07-22 13:09:01.356446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.070 [2024-07-22 13:09:01.356459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:33320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:53.070 [2024-07-22 13:09:01.356471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.070 [2024-07-22 13:09:01.356484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:33328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.070 [2024-07-22 13:09:01.356496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.070 [2024-07-22 13:09:01.356509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:32768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.070 [2024-07-22 13:09:01.356521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.070 [2024-07-22 13:09:01.356534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:98 nsid:1 lba:32792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.070 [2024-07-22 13:09:01.356551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.070 [2024-07-22 13:09:01.356565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:32800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.070 [2024-07-22 13:09:01.356576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.070 [2024-07-22 13:09:01.356589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:32808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.070 [2024-07-22 13:09:01.356601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.070 [2024-07-22 13:09:01.356614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:32832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.070 [2024-07-22 13:09:01.356627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.070 [2024-07-22 13:09:01.356640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:32856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.070 [2024-07-22 13:09:01.356651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.070 [2024-07-22 13:09:01.356665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:32864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:53.070 [2024-07-22 13:09:01.356676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.070 [2024-07-22 13:09:01.356689] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4fed0 is same with the state(5) to be set 00:46:53.070 [2024-07-22 13:09:01.356703] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:46:53.070 [2024-07-22 13:09:01.356718] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:46:53.070 [2024-07-22 13:09:01.356733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32880 len:8 PRP1 0x0 PRP2 0x0 00:46:53.070 [2024-07-22 13:09:01.356745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:53.070 [2024-07-22 13:09:01.356798] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xa4fed0 was disconnected and freed. reset controller. 
00:46:53.070 [2024-07-22 13:09:01.356814] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:46:53.070 [2024-07-22 13:09:01.356862] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:46:53.070 [2024-07-22 13:09:01.356886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:46:53.070 [2024-07-22 13:09:01.356899] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:46:53.070 [2024-07-22 13:09:01.356926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:46:53.070 [2024-07-22 13:09:01.356939] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:46:53.070 [2024-07-22 13:09:01.356950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:46:53.070 [2024-07-22 13:09:01.356963] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:46:53.070 [2024-07-22 13:09:01.356974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:46:53.070 [2024-07-22 13:09:01.356986] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:46:53.070 [2024-07-22 13:09:01.357016] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa37ea0 (9): Bad file descriptor
00:46:53.070 [2024-07-22 13:09:01.359449] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:46:53.070 [2024-07-22 13:09:01.394646] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
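The same sequence repeats here, this time moving the active path from 10.0.0.2:4421 to 10.0.0.2:4422 on subsystem nqn.2016-06.io.spdk:cnode1. A rough sketch of the listener layout this path sequence implies, using SPDK's scripts/rpc.py; the bdev name, size, and serial number are illustrative placeholders, and the exact flags should be checked against the rpc.py help for the build under test:

# Target side (sketch): one TCP subsystem exposed on three ports, giving the
# host 10.0.0.2:4420 -> :4421 -> :4422 as alternate paths.
scripts/rpc.py nvmf_create_transport -t TCP
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
for port in 4420 4421 4422; do
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s "$port"
done
# Host side (sketch): attach through the first path; the failovers seen in the
# log are then driven by the test taking the active path down.
scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1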
00:46:53.070 [2024-07-22 13:09:05.879756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:49952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:46:53.070 [2024-07-22 13:09:05.879810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the nvme_io_qpair_print_command / "ABORTED - SQ DELETION (00/08)" pair above repeats for each remaining queued READ and WRITE command on qid:1 (cids 0-126, lba range roughly 49416-50704) while the dropped qpair is drained ...]
00:46:53.073 [2024-07-22 13:09:05.883570] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0bac0 is same with the state(5) to be set
00:46:53.073 [2024-07-22 13:09:05.883584] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:46:53.073 [2024-07-22 13:09:05.883594] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:46:53.073 [2024-07-22 13:09:05.883603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:50184 len:8 PRP1 0x0 PRP2 0x0
00:46:53.073 [2024-07-22 13:09:05.883615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:46:53.073 [2024-07-22 13:09:05.883668] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xc0bac0 was disconnected and freed. reset controller.
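The block above is the expected teardown pattern for this failover step: once the TCP qpair backing 10.0.0.2:4422 is dropped, every command still queued on qid:1 is completed manually with an ABORTED - SQ DELETION status before the controller is reset. A quick post-run sanity check over a captured copy of this output is sketched below; the helper and the log-file name are hypothetical and not part of failover.sh, only the two message strings and the expected reset count come from this run.

  # Hypothetical post-run check, not part of the test suite.
  logfile=./nvmf_failover_run.log          # hypothetical capture of the output above
  grep -c 'ABORTED - SQ DELETION' "$logfile"            # one per drained command
  grep -c 'Resetting controller successful' "$logfile"  # failover.sh later expects 3 of these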
00:46:53.073 [2024-07-22 13:09:05.883684] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:46:53.073 [2024-07-22 13:09:05.883733] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:46:53.073 [2024-07-22 13:09:05.883752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:46:53.073 [2024-07-22 13:09:05.883774] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:46:53.073 [2024-07-22 13:09:05.883787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:46:53.073 [2024-07-22 13:09:05.883800] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:46:53.073 [2024-07-22 13:09:05.883812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:46:53.073 [2024-07-22 13:09:05.883824] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:46:53.073 [2024-07-22 13:09:05.883836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:46:53.073 [2024-07-22 13:09:05.883848] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:46:53.073 [2024-07-22 13:09:05.883878] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa37ea0 (9): Bad file descriptor
00:46:53.073 [2024-07-22 13:09:05.886248] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:46:53.073 [2024-07-22 13:09:05.916433] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:46:53.073
00:46:53.073 Latency(us)
00:46:53.073 Device Information : runtime(s)  IOPS      MiB/s   Fail/s  TO/s  Average  min     max
00:46:53.073 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:46:53.073 Verification LBA range: start 0x0 length 0x4000
00:46:53.073 NVMe0n1            : 15.00      14674.84  57.32   330.90  0.00  8513.85  528.76  15728.64
00:46:53.073 ===================================================================================================================
00:46:53.073 Total              :           14674.84  57.32   330.90  0.00  8513.85  528.76  15728.64
00:46:53.073 Received shutdown signal, test time was about 15.000000 seconds
00:46:53.073
00:46:53.073 Latency(us)
00:46:53.073 Device Information : runtime(s)  IOPS      MiB/s   Fail/s  TO/s  Average  min     max
00:46:53.073 ===================================================================================================================
00:46:53.073 Total              :           0.00      0.00    0.00    0.00  0.00     0.00    0.00
00:46:53.073 13:09:11 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:46:53.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
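For the second half of the test, failover.sh starts a fresh bdevperf instance in RPC-server mode and blocks until its UNIX-domain socket answers before configuring it (the waitforlisten step in the trace that follows). A minimal sketch of that startup, using only the flags recorded below; the '&' backgrounding and the rpc_get_methods polling loop stand in for the suite's own helpers and are assumptions:

  # Sketch of the bdevperf startup performed at this point (flags copied from the trace below).
  bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
  "$bdevperf" -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
  bdevperf_pid=$!
  # The suite uses its waitforlisten helper here; polling a standard RPC is an approximation.
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done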
00:46:53.073 13:09:11 -- host/failover.sh@65 -- # count=3 00:46:53.073 13:09:11 -- host/failover.sh@67 -- # (( count != 3 )) 00:46:53.073 13:09:11 -- host/failover.sh@73 -- # bdevperf_pid=95016 00:46:53.073 13:09:11 -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:46:53.073 13:09:11 -- host/failover.sh@75 -- # waitforlisten 95016 /var/tmp/bdevperf.sock 00:46:53.073 13:09:11 -- common/autotest_common.sh@819 -- # '[' -z 95016 ']' 00:46:53.073 13:09:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:46:53.073 13:09:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:46:53.073 13:09:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:46:53.073 13:09:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:46:53.073 13:09:11 -- common/autotest_common.sh@10 -- # set +x 00:46:53.638 13:09:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:46:53.638 13:09:12 -- common/autotest_common.sh@852 -- # return 0 00:46:53.638 13:09:12 -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:46:53.896 [2024-07-22 13:09:13.220599] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:46:53.896 13:09:13 -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:46:54.153 [2024-07-22 13:09:13.428750] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:46:54.153 13:09:13 -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:46:54.411 NVMe0n1 00:46:54.411 13:09:13 -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:46:54.669 00:46:54.669 13:09:13 -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:46:54.926 00:46:54.926 13:09:14 -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:46:54.926 13:09:14 -- host/failover.sh@82 -- # grep -q NVMe0 00:46:55.183 13:09:14 -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:46:55.445 13:09:14 -- host/failover.sh@87 -- # sleep 3 00:46:58.730 13:09:17 -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:46:58.730 13:09:17 -- host/failover.sh@88 -- # grep -q NVMe0 00:46:58.730 13:09:17 -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:46:58.730 13:09:17 -- host/failover.sh@90 -- # run_test_pid=95153 00:46:58.730 13:09:17 -- host/failover.sh@92 -- # wait 95153 00:46:59.662 0 00:46:59.920 13:09:19 -- host/failover.sh@94 -- # cat 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:46:59.920 [2024-07-22 13:09:11.992887] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:46:59.920 [2024-07-22 13:09:11.993096] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95016 ] 00:46:59.920 [2024-07-22 13:09:12.144223] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:59.920 [2024-07-22 13:09:12.204272] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:46:59.920 [2024-07-22 13:09:14.703379] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:46:59.920 [2024-07-22 13:09:14.703479] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:46:59.920 [2024-07-22 13:09:14.703520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:59.920 [2024-07-22 13:09:14.703536] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:46:59.920 [2024-07-22 13:09:14.703549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:59.920 [2024-07-22 13:09:14.703562] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:46:59.920 [2024-07-22 13:09:14.703574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:59.920 [2024-07-22 13:09:14.703587] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:46:59.920 [2024-07-22 13:09:14.703599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:59.920 [2024-07-22 13:09:14.703612] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:59.920 [2024-07-22 13:09:14.703655] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:59.920 [2024-07-22 13:09:14.703696] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1400ea0 (9): Bad file descriptor 00:46:59.920 [2024-07-22 13:09:14.714551] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:46:59.920 Running I/O for 1 seconds... 
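The try.txt excerpt above is the second bdevperf instance observing the forced failover from port 4420 to 4421. The sequence that provokes it is the run of rpc.py calls traced earlier: listeners are added on ports 4421 and 4422, the controller is attached with all three paths, the active 4420 path is detached, and I/O is then driven over the RPC socket. Condensed below for readability; every command appears verbatim in this run, only the $rpc shorthand is added:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py    # shorthand, not in the original trace
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  # Dropping the active 4420 path forces bdev_nvme onto the next trid (4421), which is the
  # "Start failover from 10.0.0.2:4420 to 10.0.0.2:4421" notice shown in try.txt above.
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests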
00:46:59.920
00:46:59.920 Latency(us)
00:46:59.920 Device Information : runtime(s)  IOPS      MiB/s  Fail/s  TO/s  Average  min      max
00:46:59.920 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:46:59.920 Verification LBA range: start 0x0 length 0x4000
00:46:59.920 NVMe0n1            : 1.01       14719.37  57.50  0.00    0.00  8654.10  1213.91  13822.14
00:46:59.920 ===================================================================================================================
00:46:59.920 Total              :           14719.37  57.50  0.00    0.00  8654.10  1213.91  13822.14
00:46:59.920 13:09:19 -- host/failover.sh@95 -- # grep -q NVMe0
00:47:00.178 13:09:19 -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:47:00.435 13:09:19 -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:47:00.435 13:09:19 -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:47:00.435 13:09:19 -- host/failover.sh@99 -- # grep -q NVMe0
00:47:00.692 13:09:19 -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:47:00.692 13:09:20 -- host/failover.sh@101 -- # sleep 3
00:47:03.996 13:09:23 -- host/failover.sh@103 -- # grep -q NVMe0
00:47:03.996 13:09:23 -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:47:03.996 13:09:23 -- host/failover.sh@108 -- # killprocess 95016
00:47:03.996 13:09:23 -- common/autotest_common.sh@926 -- # '[' -z 95016 ']'
00:47:03.996 13:09:23 -- common/autotest_common.sh@930 -- # kill -0 95016
00:47:03.996 13:09:23 -- common/autotest_common.sh@931 -- # uname
00:47:03.996 13:09:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:47:03.996 13:09:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 95016
00:47:03.996 killing process with pid 95016
00:47:03.997 13:09:23 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:47:03.997 13:09:23 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:47:03.997 13:09:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 95016'
00:47:03.997 13:09:23 -- common/autotest_common.sh@945 -- # kill 95016
00:47:03.997 13:09:23 -- common/autotest_common.sh@950 -- # wait 95016
00:47:04.254 13:09:23 -- host/failover.sh@110 -- # sync
00:47:04.254 13:09:23 -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:47:04.512 13:09:23 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT
00:47:04.512 13:09:23 -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:47:04.512 13:09:23 -- host/failover.sh@116 -- # nvmftestfini
00:47:04.512 13:09:23 -- nvmf/common.sh@476 -- # nvmfcleanup
00:47:04.512 13:09:23 -- nvmf/common.sh@116 -- # sync
00:47:04.512 13:09:23 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:47:04.512 13:09:23 -- nvmf/common.sh@119 -- # set +e
00:47:04.512 13:09:23 -- nvmf/common.sh@120 -- # for i in {1..20}
00:47:04.512 13:09:23 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:47:04.512 rmmod nvme_tcp
00:47:04.512 rmmod nvme_fabrics
00:47:04.512 rmmod nvme_keyring
00:47:04.512 13:09:23 -- nvmf/common.sh@122
-- # modprobe -v -r nvme-fabrics 00:47:04.512 13:09:23 -- nvmf/common.sh@123 -- # set -e 00:47:04.512 13:09:23 -- nvmf/common.sh@124 -- # return 0 00:47:04.512 13:09:23 -- nvmf/common.sh@477 -- # '[' -n 94649 ']' 00:47:04.512 13:09:23 -- nvmf/common.sh@478 -- # killprocess 94649 00:47:04.512 13:09:23 -- common/autotest_common.sh@926 -- # '[' -z 94649 ']' 00:47:04.512 13:09:23 -- common/autotest_common.sh@930 -- # kill -0 94649 00:47:04.512 13:09:23 -- common/autotest_common.sh@931 -- # uname 00:47:04.512 13:09:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:47:04.512 13:09:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 94649 00:47:04.512 killing process with pid 94649 00:47:04.512 13:09:23 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:47:04.512 13:09:23 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:47:04.512 13:09:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 94649' 00:47:04.512 13:09:23 -- common/autotest_common.sh@945 -- # kill 94649 00:47:04.512 13:09:23 -- common/autotest_common.sh@950 -- # wait 94649 00:47:04.770 13:09:24 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:47:04.770 13:09:24 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:47:04.770 13:09:24 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:47:04.770 13:09:24 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:47:04.770 13:09:24 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:47:04.770 13:09:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:47:04.770 13:09:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:47:04.770 13:09:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:47:04.770 13:09:24 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:47:04.770 00:47:04.770 real 0m32.213s 00:47:04.770 user 2m5.274s 00:47:04.770 sys 0m4.637s 00:47:04.770 13:09:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:47:04.770 13:09:24 -- common/autotest_common.sh@10 -- # set +x 00:47:04.770 ************************************ 00:47:04.770 END TEST nvmf_failover 00:47:04.770 ************************************ 00:47:04.770 13:09:24 -- nvmf/nvmf.sh@101 -- # run_test nvmf_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:47:04.770 13:09:24 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:47:04.770 13:09:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:47:04.770 13:09:24 -- common/autotest_common.sh@10 -- # set +x 00:47:04.770 ************************************ 00:47:04.770 START TEST nvmf_discovery 00:47:04.770 ************************************ 00:47:04.770 13:09:24 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:47:05.028 * Looking for test storage... 
00:47:05.028 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:47:05.028 13:09:24 -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:47:05.028 13:09:24 -- nvmf/common.sh@7 -- # uname -s 00:47:05.028 13:09:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:47:05.028 13:09:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:47:05.028 13:09:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:47:05.028 13:09:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:47:05.028 13:09:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:47:05.028 13:09:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:47:05.028 13:09:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:47:05.028 13:09:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:47:05.028 13:09:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:47:05.028 13:09:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:47:05.028 13:09:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:864ae4a3-cd96-439b-8ef2-6dab4d992115 00:47:05.028 13:09:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=864ae4a3-cd96-439b-8ef2-6dab4d992115 00:47:05.028 13:09:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:47:05.028 13:09:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:47:05.028 13:09:24 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:47:05.028 13:09:24 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:47:05.028 13:09:24 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:47:05.028 13:09:24 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:47:05.028 13:09:24 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:47:05.028 13:09:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:05.028 13:09:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:05.028 13:09:24 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:05.028 13:09:24 -- paths/export.sh@5 
-- # export PATH 00:47:05.028 13:09:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:05.028 13:09:24 -- nvmf/common.sh@46 -- # : 0 00:47:05.028 13:09:24 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:47:05.028 13:09:24 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:47:05.028 13:09:24 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:47:05.028 13:09:24 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:47:05.028 13:09:24 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:47:05.028 13:09:24 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:47:05.028 13:09:24 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:47:05.028 13:09:24 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:47:05.028 13:09:24 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:47:05.028 13:09:24 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:47:05.028 13:09:24 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:47:05.028 13:09:24 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:47:05.028 13:09:24 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:47:05.028 13:09:24 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:47:05.028 13:09:24 -- host/discovery.sh@25 -- # nvmftestinit 00:47:05.028 13:09:24 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:47:05.028 13:09:24 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:47:05.028 13:09:24 -- nvmf/common.sh@436 -- # prepare_net_devs 00:47:05.028 13:09:24 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:47:05.028 13:09:24 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:47:05.028 13:09:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:47:05.028 13:09:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:47:05.028 13:09:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:47:05.028 13:09:24 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:47:05.028 13:09:24 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:47:05.028 13:09:24 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:47:05.028 13:09:24 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:47:05.028 13:09:24 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:47:05.028 13:09:24 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:47:05.028 13:09:24 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:47:05.028 13:09:24 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:47:05.028 13:09:24 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:47:05.028 13:09:24 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:47:05.028 13:09:24 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:47:05.028 13:09:24 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:47:05.028 13:09:24 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:47:05.028 13:09:24 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:47:05.028 13:09:24 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:47:05.028 
13:09:24 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:47:05.028 13:09:24 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:47:05.028 13:09:24 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:47:05.028 13:09:24 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:47:05.028 13:09:24 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:47:05.028 Cannot find device "nvmf_tgt_br" 00:47:05.028 13:09:24 -- nvmf/common.sh@154 -- # true 00:47:05.028 13:09:24 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:47:05.028 Cannot find device "nvmf_tgt_br2" 00:47:05.028 13:09:24 -- nvmf/common.sh@155 -- # true 00:47:05.028 13:09:24 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:47:05.028 13:09:24 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:47:05.028 Cannot find device "nvmf_tgt_br" 00:47:05.028 13:09:24 -- nvmf/common.sh@157 -- # true 00:47:05.028 13:09:24 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:47:05.028 Cannot find device "nvmf_tgt_br2" 00:47:05.028 13:09:24 -- nvmf/common.sh@158 -- # true 00:47:05.028 13:09:24 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:47:05.028 13:09:24 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:47:05.028 13:09:24 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:47:05.028 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:47:05.028 13:09:24 -- nvmf/common.sh@161 -- # true 00:47:05.028 13:09:24 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:47:05.028 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:47:05.028 13:09:24 -- nvmf/common.sh@162 -- # true 00:47:05.028 13:09:24 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:47:05.028 13:09:24 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:47:05.028 13:09:24 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:47:05.029 13:09:24 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:47:05.029 13:09:24 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:47:05.286 13:09:24 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:47:05.286 13:09:24 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:47:05.286 13:09:24 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:47:05.286 13:09:24 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:47:05.286 13:09:24 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:47:05.286 13:09:24 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:47:05.286 13:09:24 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:47:05.286 13:09:24 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:47:05.286 13:09:24 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:47:05.286 13:09:24 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:47:05.286 13:09:24 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:47:05.286 13:09:24 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:47:05.286 13:09:24 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:47:05.286 13:09:24 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br 
master nvmf_br 00:47:05.286 13:09:24 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:47:05.286 13:09:24 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:47:05.286 13:09:24 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:47:05.286 13:09:24 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:47:05.286 13:09:24 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:47:05.286 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:47:05.286 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:47:05.286 00:47:05.286 --- 10.0.0.2 ping statistics --- 00:47:05.286 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:47:05.286 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:47:05.286 13:09:24 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:47:05.286 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:47:05.286 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.033 ms 00:47:05.286 00:47:05.286 --- 10.0.0.3 ping statistics --- 00:47:05.286 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:47:05.286 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:47:05.286 13:09:24 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:47:05.286 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:47:05.286 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.049 ms 00:47:05.286 00:47:05.286 --- 10.0.0.1 ping statistics --- 00:47:05.286 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:47:05.286 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:47:05.286 13:09:24 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:47:05.286 13:09:24 -- nvmf/common.sh@421 -- # return 0 00:47:05.286 13:09:24 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:47:05.286 13:09:24 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:47:05.286 13:09:24 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:47:05.286 13:09:24 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:47:05.286 13:09:24 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:47:05.286 13:09:24 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:47:05.286 13:09:24 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:47:05.286 13:09:24 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:47:05.286 13:09:24 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:47:05.286 13:09:24 -- common/autotest_common.sh@712 -- # xtrace_disable 00:47:05.286 13:09:24 -- common/autotest_common.sh@10 -- # set +x 00:47:05.286 13:09:24 -- nvmf/common.sh@469 -- # nvmfpid=95445 00:47:05.286 13:09:24 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:47:05.286 13:09:24 -- nvmf/common.sh@470 -- # waitforlisten 95445 00:47:05.286 13:09:24 -- common/autotest_common.sh@819 -- # '[' -z 95445 ']' 00:47:05.286 13:09:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:47:05.286 13:09:24 -- common/autotest_common.sh@824 -- # local max_retries=100 00:47:05.286 13:09:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:47:05.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
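The trace above is the nvmf_veth_init step from test/nvmf/common.sh building the test network before the target starts. A condensed, hand-runnable sketch of the same topology (run as root; the namespace, interface and address names are taken verbatim from the trace, so treat it as an illustration rather than the script itself):

# target namespace plus three veth pairs: one initiator link, two target links
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
# 10.0.0.1 stays on the host side; 10.0.0.2 and 10.0.0.3 live inside the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
# bring everything up and bridge the host-side peers together
ip link set nvmf_init_if up; ip link set nvmf_init_br up
ip link set nvmf_tgt_br up; ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge; ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
# let NVMe/TCP traffic in on 4420, allow bridge forwarding, then sanity-check reachability
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3 && ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

The "Cannot find device" and "Cannot open network namespace" messages earlier in the trace come from the cleanup pass that runs before this setup; on a fresh node there is nothing to tear down, so those errors are expected and harmless.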
00:47:05.286 13:09:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:47:05.286 13:09:24 -- common/autotest_common.sh@10 -- # set +x 00:47:05.286 [2024-07-22 13:09:24.683829] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:47:05.286 [2024-07-22 13:09:24.683908] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:47:05.544 [2024-07-22 13:09:24.823360] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:05.544 [2024-07-22 13:09:24.893017] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:47:05.544 [2024-07-22 13:09:24.893194] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:47:05.544 [2024-07-22 13:09:24.893219] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:47:05.544 [2024-07-22 13:09:24.893228] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:47:05.544 [2024-07-22 13:09:24.893253] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:47:06.476 13:09:25 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:47:06.476 13:09:25 -- common/autotest_common.sh@852 -- # return 0 00:47:06.476 13:09:25 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:47:06.476 13:09:25 -- common/autotest_common.sh@718 -- # xtrace_disable 00:47:06.476 13:09:25 -- common/autotest_common.sh@10 -- # set +x 00:47:06.477 13:09:25 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:47:06.477 13:09:25 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:47:06.477 13:09:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:47:06.477 13:09:25 -- common/autotest_common.sh@10 -- # set +x 00:47:06.477 [2024-07-22 13:09:25.730470] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:47:06.477 13:09:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:47:06.477 13:09:25 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:47:06.477 13:09:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:47:06.477 13:09:25 -- common/autotest_common.sh@10 -- # set +x 00:47:06.477 [2024-07-22 13:09:25.738727] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:47:06.477 13:09:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:47:06.477 13:09:25 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:47:06.477 13:09:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:47:06.477 13:09:25 -- common/autotest_common.sh@10 -- # set +x 00:47:06.477 null0 00:47:06.477 13:09:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:47:06.477 13:09:25 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:47:06.477 13:09:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:47:06.477 13:09:25 -- common/autotest_common.sh@10 -- # set +x 00:47:06.477 null1 00:47:06.477 13:09:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:47:06.477 13:09:25 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:47:06.477 13:09:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:47:06.477 13:09:25 -- 
common/autotest_common.sh@10 -- # set +x 00:47:06.477 13:09:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:47:06.477 13:09:25 -- host/discovery.sh@45 -- # hostpid=95496 00:47:06.477 13:09:25 -- host/discovery.sh@46 -- # waitforlisten 95496 /tmp/host.sock 00:47:06.477 13:09:25 -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:47:06.477 13:09:25 -- common/autotest_common.sh@819 -- # '[' -z 95496 ']' 00:47:06.477 13:09:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/tmp/host.sock 00:47:06.477 13:09:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:47:06.477 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:47:06.477 13:09:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:47:06.477 13:09:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:47:06.477 13:09:25 -- common/autotest_common.sh@10 -- # set +x 00:47:06.477 [2024-07-22 13:09:25.822814] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:47:06.477 [2024-07-22 13:09:25.822903] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95496 ] 00:47:06.734 [2024-07-22 13:09:25.959077] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:06.734 [2024-07-22 13:09:26.037974] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:47:06.734 [2024-07-22 13:09:26.038163] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:47:07.667 13:09:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:47:07.667 13:09:26 -- common/autotest_common.sh@852 -- # return 0 00:47:07.667 13:09:26 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:47:07.667 13:09:26 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:47:07.667 13:09:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:47:07.667 13:09:26 -- common/autotest_common.sh@10 -- # set +x 00:47:07.667 13:09:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:47:07.667 13:09:26 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:47:07.667 13:09:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:47:07.667 13:09:26 -- common/autotest_common.sh@10 -- # set +x 00:47:07.667 13:09:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:47:07.667 13:09:26 -- host/discovery.sh@72 -- # notify_id=0 00:47:07.667 13:09:26 -- host/discovery.sh@78 -- # get_subsystem_names 00:47:07.667 13:09:26 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:47:07.667 13:09:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:47:07.667 13:09:26 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:47:07.667 13:09:26 -- common/autotest_common.sh@10 -- # set +x 00:47:07.667 13:09:26 -- host/discovery.sh@59 -- # sort 00:47:07.667 13:09:26 -- host/discovery.sh@59 -- # xargs 00:47:07.667 13:09:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:47:07.667 13:09:26 -- host/discovery.sh@78 -- # [[ '' == '' ]] 00:47:07.667 13:09:26 -- host/discovery.sh@79 -- # get_bdev_list 00:47:07.667 
13:09:26 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:47:07.667 13:09:26 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:47:07.667 13:09:26 -- host/discovery.sh@55 -- # sort 00:47:07.667 13:09:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:47:07.667 13:09:26 -- common/autotest_common.sh@10 -- # set +x 00:47:07.667 13:09:26 -- host/discovery.sh@55 -- # xargs 00:47:07.667 13:09:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:47:07.667 13:09:26 -- host/discovery.sh@79 -- # [[ '' == '' ]] 00:47:07.667 13:09:26 -- host/discovery.sh@81 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:47:07.667 13:09:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:47:07.667 13:09:26 -- common/autotest_common.sh@10 -- # set +x 00:47:07.667 13:09:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:47:07.667 13:09:26 -- host/discovery.sh@82 -- # get_subsystem_names 00:47:07.667 13:09:26 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:47:07.667 13:09:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:47:07.667 13:09:26 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:47:07.667 13:09:26 -- common/autotest_common.sh@10 -- # set +x 00:47:07.667 13:09:26 -- host/discovery.sh@59 -- # sort 00:47:07.667 13:09:26 -- host/discovery.sh@59 -- # xargs 00:47:07.667 13:09:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:47:07.667 13:09:26 -- host/discovery.sh@82 -- # [[ '' == '' ]] 00:47:07.667 13:09:26 -- host/discovery.sh@83 -- # get_bdev_list 00:47:07.667 13:09:26 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:47:07.667 13:09:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:47:07.667 13:09:26 -- common/autotest_common.sh@10 -- # set +x 00:47:07.667 13:09:26 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:47:07.667 13:09:26 -- host/discovery.sh@55 -- # sort 00:47:07.667 13:09:26 -- host/discovery.sh@55 -- # xargs 00:47:07.667 13:09:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:47:07.667 13:09:27 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:47:07.667 13:09:27 -- host/discovery.sh@85 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:47:07.667 13:09:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:47:07.667 13:09:27 -- common/autotest_common.sh@10 -- # set +x 00:47:07.667 13:09:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:47:07.667 13:09:27 -- host/discovery.sh@86 -- # get_subsystem_names 00:47:07.667 13:09:27 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:47:07.667 13:09:27 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:47:07.667 13:09:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:47:07.667 13:09:27 -- host/discovery.sh@59 -- # sort 00:47:07.667 13:09:27 -- common/autotest_common.sh@10 -- # set +x 00:47:07.667 13:09:27 -- host/discovery.sh@59 -- # xargs 00:47:07.667 13:09:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:47:07.925 13:09:27 -- host/discovery.sh@86 -- # [[ '' == '' ]] 00:47:07.925 13:09:27 -- host/discovery.sh@87 -- # get_bdev_list 00:47:07.925 13:09:27 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:47:07.925 13:09:27 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:47:07.925 13:09:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:47:07.925 13:09:27 -- common/autotest_common.sh@10 -- # set +x 00:47:07.925 13:09:27 -- host/discovery.sh@55 -- # sort 00:47:07.925 13:09:27 -- host/discovery.sh@55 -- # 
xargs 00:47:07.925 13:09:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:47:07.925 13:09:27 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:47:07.925 13:09:27 -- host/discovery.sh@91 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:47:07.925 13:09:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:47:07.925 13:09:27 -- common/autotest_common.sh@10 -- # set +x 00:47:07.925 [2024-07-22 13:09:27.159000] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:47:07.925 13:09:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:47:07.925 13:09:27 -- host/discovery.sh@92 -- # get_subsystem_names 00:47:07.925 13:09:27 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:47:07.925 13:09:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:47:07.925 13:09:27 -- common/autotest_common.sh@10 -- # set +x 00:47:07.925 13:09:27 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:47:07.925 13:09:27 -- host/discovery.sh@59 -- # sort 00:47:07.925 13:09:27 -- host/discovery.sh@59 -- # xargs 00:47:07.925 13:09:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:47:07.925 13:09:27 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:47:07.925 13:09:27 -- host/discovery.sh@93 -- # get_bdev_list 00:47:07.925 13:09:27 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:47:07.925 13:09:27 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:47:07.925 13:09:27 -- host/discovery.sh@55 -- # sort 00:47:07.925 13:09:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:47:07.925 13:09:27 -- common/autotest_common.sh@10 -- # set +x 00:47:07.925 13:09:27 -- host/discovery.sh@55 -- # xargs 00:47:07.925 13:09:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:47:07.925 13:09:27 -- host/discovery.sh@93 -- # [[ '' == '' ]] 00:47:07.925 13:09:27 -- host/discovery.sh@94 -- # get_notification_count 00:47:07.925 13:09:27 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:47:07.925 13:09:27 -- host/discovery.sh@74 -- # jq '. 
| length' 00:47:07.925 13:09:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:47:07.925 13:09:27 -- common/autotest_common.sh@10 -- # set +x 00:47:07.925 13:09:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:47:07.925 13:09:27 -- host/discovery.sh@74 -- # notification_count=0 00:47:07.925 13:09:27 -- host/discovery.sh@75 -- # notify_id=0 00:47:07.925 13:09:27 -- host/discovery.sh@95 -- # [[ 0 == 0 ]] 00:47:07.925 13:09:27 -- host/discovery.sh@99 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:47:07.925 13:09:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:47:07.925 13:09:27 -- common/autotest_common.sh@10 -- # set +x 00:47:07.925 13:09:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:47:07.925 13:09:27 -- host/discovery.sh@100 -- # sleep 1 00:47:08.489 [2024-07-22 13:09:27.813479] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:47:08.489 [2024-07-22 13:09:27.813525] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:47:08.489 [2024-07-22 13:09:27.813543] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:47:08.489 [2024-07-22 13:09:27.899606] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:47:08.746 [2024-07-22 13:09:27.955214] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:47:08.746 [2024-07-22 13:09:27.955240] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:47:09.004 13:09:28 -- host/discovery.sh@101 -- # get_subsystem_names 00:47:09.004 13:09:28 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:47:09.004 13:09:28 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:47:09.004 13:09:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:47:09.004 13:09:28 -- host/discovery.sh@59 -- # sort 00:47:09.004 13:09:28 -- common/autotest_common.sh@10 -- # set +x 00:47:09.004 13:09:28 -- host/discovery.sh@59 -- # xargs 00:47:09.004 13:09:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:47:09.004 13:09:28 -- host/discovery.sh@101 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:47:09.004 13:09:28 -- host/discovery.sh@102 -- # get_bdev_list 00:47:09.004 13:09:28 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:47:09.004 13:09:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:47:09.004 13:09:28 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:47:09.004 13:09:28 -- common/autotest_common.sh@10 -- # set +x 00:47:09.004 13:09:28 -- host/discovery.sh@55 -- # xargs 00:47:09.004 13:09:28 -- host/discovery.sh@55 -- # sort 00:47:09.004 13:09:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:47:09.261 13:09:28 -- host/discovery.sh@102 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:47:09.261 13:09:28 -- host/discovery.sh@103 -- # get_subsystem_paths nvme0 00:47:09.261 13:09:28 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:47:09.261 13:09:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:47:09.261 13:09:28 -- common/autotest_common.sh@10 -- # set +x 00:47:09.261 13:09:28 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:47:09.261 13:09:28 -- host/discovery.sh@63 -- # sort -n 00:47:09.261 13:09:28 -- 
host/discovery.sh@63 -- # xargs 00:47:09.261 13:09:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:47:09.261 13:09:28 -- host/discovery.sh@103 -- # [[ 4420 == \4\4\2\0 ]] 00:47:09.261 13:09:28 -- host/discovery.sh@104 -- # get_notification_count 00:47:09.261 13:09:28 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:47:09.261 13:09:28 -- host/discovery.sh@74 -- # jq '. | length' 00:47:09.261 13:09:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:47:09.261 13:09:28 -- common/autotest_common.sh@10 -- # set +x 00:47:09.261 13:09:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:47:09.261 13:09:28 -- host/discovery.sh@74 -- # notification_count=1 00:47:09.261 13:09:28 -- host/discovery.sh@75 -- # notify_id=1 00:47:09.261 13:09:28 -- host/discovery.sh@105 -- # [[ 1 == 1 ]] 00:47:09.261 13:09:28 -- host/discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:47:09.261 13:09:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:47:09.261 13:09:28 -- common/autotest_common.sh@10 -- # set +x 00:47:09.261 13:09:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:47:09.261 13:09:28 -- host/discovery.sh@109 -- # sleep 1 00:47:10.192 13:09:29 -- host/discovery.sh@110 -- # get_bdev_list 00:47:10.192 13:09:29 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:47:10.192 13:09:29 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:47:10.192 13:09:29 -- host/discovery.sh@55 -- # sort 00:47:10.192 13:09:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:47:10.192 13:09:29 -- common/autotest_common.sh@10 -- # set +x 00:47:10.192 13:09:29 -- host/discovery.sh@55 -- # xargs 00:47:10.467 13:09:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:47:10.467 13:09:29 -- host/discovery.sh@110 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:47:10.467 13:09:29 -- host/discovery.sh@111 -- # get_notification_count 00:47:10.467 13:09:29 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:47:10.467 13:09:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:47:10.467 13:09:29 -- common/autotest_common.sh@10 -- # set +x 00:47:10.467 13:09:29 -- host/discovery.sh@74 -- # jq '. 
| length' 00:47:10.467 13:09:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:47:10.467 13:09:29 -- host/discovery.sh@74 -- # notification_count=1 00:47:10.467 13:09:29 -- host/discovery.sh@75 -- # notify_id=2 00:47:10.467 13:09:29 -- host/discovery.sh@112 -- # [[ 1 == 1 ]] 00:47:10.467 13:09:29 -- host/discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:47:10.467 13:09:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:47:10.467 13:09:29 -- common/autotest_common.sh@10 -- # set +x 00:47:10.467 [2024-07-22 13:09:29.696329] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:47:10.467 [2024-07-22 13:09:29.696765] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:47:10.467 [2024-07-22 13:09:29.696793] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:47:10.467 13:09:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:47:10.467 13:09:29 -- host/discovery.sh@117 -- # sleep 1 00:47:10.467 [2024-07-22 13:09:29.782840] bdev_nvme.c:6683:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:47:10.467 [2024-07-22 13:09:29.846074] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:47:10.467 [2024-07-22 13:09:29.846114] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:47:10.467 [2024-07-22 13:09:29.846121] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:47:11.410 13:09:30 -- host/discovery.sh@118 -- # get_subsystem_names 00:47:11.410 13:09:30 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:47:11.410 13:09:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:47:11.410 13:09:30 -- common/autotest_common.sh@10 -- # set +x 00:47:11.410 13:09:30 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:47:11.410 13:09:30 -- host/discovery.sh@59 -- # sort 00:47:11.410 13:09:30 -- host/discovery.sh@59 -- # xargs 00:47:11.410 13:09:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:47:11.410 13:09:30 -- host/discovery.sh@118 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:47:11.410 13:09:30 -- host/discovery.sh@119 -- # get_bdev_list 00:47:11.410 13:09:30 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:47:11.410 13:09:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:47:11.410 13:09:30 -- common/autotest_common.sh@10 -- # set +x 00:47:11.410 13:09:30 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:47:11.410 13:09:30 -- host/discovery.sh@55 -- # xargs 00:47:11.410 13:09:30 -- host/discovery.sh@55 -- # sort 00:47:11.410 13:09:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:47:11.410 13:09:30 -- host/discovery.sh@119 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:47:11.410 13:09:30 -- host/discovery.sh@120 -- # get_subsystem_paths nvme0 00:47:11.410 13:09:30 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:47:11.410 13:09:30 -- host/discovery.sh@63 -- # sort -n 00:47:11.410 13:09:30 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:47:11.410 13:09:30 -- host/discovery.sh@63 -- # xargs 00:47:11.410 13:09:30 -- common/autotest_common.sh@551 -- # 
xtrace_disable 00:47:11.410 13:09:30 -- common/autotest_common.sh@10 -- # set +x 00:47:11.668 13:09:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:47:11.668 13:09:30 -- host/discovery.sh@120 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:47:11.668 13:09:30 -- host/discovery.sh@121 -- # get_notification_count 00:47:11.668 13:09:30 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:47:11.668 13:09:30 -- host/discovery.sh@74 -- # jq '. | length' 00:47:11.668 13:09:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:47:11.668 13:09:30 -- common/autotest_common.sh@10 -- # set +x 00:47:11.668 13:09:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:47:11.668 13:09:30 -- host/discovery.sh@74 -- # notification_count=0 00:47:11.668 13:09:30 -- host/discovery.sh@75 -- # notify_id=2 00:47:11.668 13:09:30 -- host/discovery.sh@122 -- # [[ 0 == 0 ]] 00:47:11.668 13:09:30 -- host/discovery.sh@126 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:47:11.668 13:09:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:47:11.668 13:09:30 -- common/autotest_common.sh@10 -- # set +x 00:47:11.668 [2024-07-22 13:09:30.929168] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:47:11.668 [2024-07-22 13:09:30.929388] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:47:11.668 [2024-07-22 13:09:30.933300] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:47:11.668 [2024-07-22 13:09:30.933489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 c 13:09:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:47:11.668 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:11.668 [2024-07-22 13:09:30.933631] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 ns 13:09:30 -- host/discovery.sh@127 -- # sleep 1 00:47:11.668 id:0 cdw10:00000000 cdw11:00000000 00:47:11.668 [2024-07-22 13:09:30.933772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:11.668 [2024-07-22 13:09:30.933879] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:47:11.668 [2024-07-22 13:09:30.934000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:11.668 [2024-07-22 13:09:30.934019] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:47:11.668 [2024-07-22 13:09:30.934030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:11.668 [2024-07-22 13:09:30.934040] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8fdea0 is same with the state(5) to be set 00:47:11.668 [2024-07-22 13:09:30.943258] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8fdea0 (9): Bad file descriptor 00:47:11.668 [2024-07-22 13:09:30.953275] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:47:11.668 [2024-07-22 13:09:30.953388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, 
errno = 111 00:47:11.668 [2024-07-22 13:09:30.953431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:11.668 [2024-07-22 13:09:30.953446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8fdea0 with addr=10.0.0.2, port=4420 00:47:11.668 [2024-07-22 13:09:30.953463] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8fdea0 is same with the state(5) to be set 00:47:11.668 [2024-07-22 13:09:30.953478] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8fdea0 (9): Bad file descriptor 00:47:11.668 [2024-07-22 13:09:30.953491] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:47:11.668 [2024-07-22 13:09:30.953500] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:47:11.668 [2024-07-22 13:09:30.953540] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:47:11.669 [2024-07-22 13:09:30.953571] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:47:11.669 [2024-07-22 13:09:30.963337] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:47:11.669 [2024-07-22 13:09:30.963438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:11.669 [2024-07-22 13:09:30.963479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:11.669 [2024-07-22 13:09:30.963493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8fdea0 with addr=10.0.0.2, port=4420 00:47:11.669 [2024-07-22 13:09:30.963503] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8fdea0 is same with the state(5) to be set 00:47:11.669 [2024-07-22 13:09:30.963517] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8fdea0 (9): Bad file descriptor 00:47:11.669 [2024-07-22 13:09:30.963529] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:47:11.669 [2024-07-22 13:09:30.963537] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:47:11.669 [2024-07-22 13:09:30.963545] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:47:11.669 [2024-07-22 13:09:30.963558] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
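In the block above the test has just added a listener on port 4421 and removed the original one on 4420 (host/discovery.sh@116 and @126), so the repeated "connect() failed, errno = 111" lines are simply ECONNREFUSED: the host keeps retrying the now-closed 4420 path until the next discovery log page prunes it. The target-side RPC pair driving this phase, written as plain scripts/rpc.py calls instead of the rpc_cmd test wrapper and assuming the main target is on its default RPC socket, is roughly:

# add the new data listener, then drop the original one; the host is notified via an
# AER on the 8009 discovery controller and re-reads the discovery log page
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421
scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# until the 4420 entry disappears from the discovery log page, reconnect attempts to
# 10.0.0.2:4420 fail with ECONNREFUSED (errno 111), which is exactly what the trace shows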
00:47:11.669 [2024-07-22 13:09:30.973414] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:47:11.669 [2024-07-22 13:09:30.973507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:11.669 [2024-07-22 13:09:30.973547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:11.669 [2024-07-22 13:09:30.973561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8fdea0 with addr=10.0.0.2, port=4420 00:47:11.669 [2024-07-22 13:09:30.973571] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8fdea0 is same with the state(5) to be set 00:47:11.669 [2024-07-22 13:09:30.973584] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8fdea0 (9): Bad file descriptor 00:47:11.669 [2024-07-22 13:09:30.973597] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:47:11.669 [2024-07-22 13:09:30.973605] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:47:11.669 [2024-07-22 13:09:30.973613] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:47:11.669 [2024-07-22 13:09:30.973625] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:47:11.669 [2024-07-22 13:09:30.983478] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:47:11.669 [2024-07-22 13:09:30.983583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:11.669 [2024-07-22 13:09:30.983624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:11.669 [2024-07-22 13:09:30.983639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8fdea0 with addr=10.0.0.2, port=4420 00:47:11.669 [2024-07-22 13:09:30.983648] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8fdea0 is same with the state(5) to be set 00:47:11.669 [2024-07-22 13:09:30.983662] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8fdea0 (9): Bad file descriptor 00:47:11.669 [2024-07-22 13:09:30.983674] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:47:11.669 [2024-07-22 13:09:30.983682] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:47:11.669 [2024-07-22 13:09:30.983690] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:47:11.669 [2024-07-22 13:09:30.983702] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
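The controller state being exercised here can be checked by hand with the same RPC that the trace's get_subsystem_paths helper uses; a standalone equivalent, assuming the host-side app is still listening on /tmp/host.sock, is:

# list the ports (trsvcid) of every path currently attached to controller nvme0
scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 \
    | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
# expected output: "4420 4421" right after the new path attaches, then just "4421"
# once the removed 4420 listener has been pruned from the discovery entries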
00:47:11.669 [2024-07-22 13:09:30.993567] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:47:11.669 [2024-07-22 13:09:30.993666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:11.669 [2024-07-22 13:09:30.993705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:11.669 [2024-07-22 13:09:30.993719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8fdea0 with addr=10.0.0.2, port=4420 00:47:11.669 [2024-07-22 13:09:30.993729] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8fdea0 is same with the state(5) to be set 00:47:11.669 [2024-07-22 13:09:30.993743] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8fdea0 (9): Bad file descriptor 00:47:11.669 [2024-07-22 13:09:30.993755] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:47:11.669 [2024-07-22 13:09:30.993763] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:47:11.669 [2024-07-22 13:09:30.993770] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:47:11.669 [2024-07-22 13:09:30.993782] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:47:11.669 [2024-07-22 13:09:31.003624] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:47:11.669 [2024-07-22 13:09:31.003721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:11.669 [2024-07-22 13:09:31.003761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:11.669 [2024-07-22 13:09:31.003776] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8fdea0 with addr=10.0.0.2, port=4420 00:47:11.669 [2024-07-22 13:09:31.003785] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8fdea0 is same with the state(5) to be set 00:47:11.669 [2024-07-22 13:09:31.003798] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8fdea0 (9): Bad file descriptor 00:47:11.669 [2024-07-22 13:09:31.003811] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:47:11.669 [2024-07-22 13:09:31.003826] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:47:11.669 [2024-07-22 13:09:31.003834] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:47:11.669 [2024-07-22 13:09:31.003846] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
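For orientation, the host side of this test is just a second nvmf_tgt instance acting as the NVMe-oF initiator, driven over /tmp/host.sock. A minimal sketch of that host-side sequence, reusing the binary path, NQNs and ports from the trace (scripts/rpc.py standing in for the rpc_cmd wrapper):

# start the host application with its own RPC socket
/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &
# (the test waits for the RPC socket to come up before issuing commands)
# attach to the discovery service on 8009; -b nvme names the attached controllers nvme0, nvme1, ...
scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
    -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test
# ... listener and namespace changes on the target are picked up automatically ...
# stop discovery; this also detaches the controllers it created
scripts/rpc.py -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme

Further down in this trace the same start command is reissued with -w (wait for attach), then with an already-used name, which fails with Code=-17 (File exists), and finally against port 8010 with -T 3000, which times out with Code=-110; those JSON-RPC errors are deliberate negative cases, not test failures.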
00:47:11.669 [2024-07-22 13:09:31.013697] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:47:11.669 [2024-07-22 13:09:31.013794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:11.669 [2024-07-22 13:09:31.013834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:11.669 [2024-07-22 13:09:31.013863] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8fdea0 with addr=10.0.0.2, port=4420 00:47:11.669 [2024-07-22 13:09:31.013872] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8fdea0 is same with the state(5) to be set 00:47:11.669 [2024-07-22 13:09:31.013886] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8fdea0 (9): Bad file descriptor 00:47:11.669 [2024-07-22 13:09:31.013898] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:47:11.669 [2024-07-22 13:09:31.013906] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:47:11.669 [2024-07-22 13:09:31.013914] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:47:11.669 [2024-07-22 13:09:31.013927] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:47:11.669 [2024-07-22 13:09:31.016293] bdev_nvme.c:6546:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:47:11.669 [2024-07-22 13:09:31.016334] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:47:12.600 13:09:31 -- host/discovery.sh@128 -- # get_subsystem_names 00:47:12.600 13:09:31 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:47:12.600 13:09:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:47:12.600 13:09:31 -- common/autotest_common.sh@10 -- # set +x 00:47:12.600 13:09:31 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:47:12.600 13:09:31 -- host/discovery.sh@59 -- # sort 00:47:12.600 13:09:31 -- host/discovery.sh@59 -- # xargs 00:47:12.600 13:09:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:47:12.600 13:09:31 -- host/discovery.sh@128 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:47:12.600 13:09:31 -- host/discovery.sh@129 -- # get_bdev_list 00:47:12.600 13:09:31 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:47:12.600 13:09:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:47:12.600 13:09:31 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:47:12.600 13:09:32 -- common/autotest_common.sh@10 -- # set +x 00:47:12.600 13:09:32 -- host/discovery.sh@55 -- # sort 00:47:12.600 13:09:32 -- host/discovery.sh@55 -- # xargs 00:47:12.857 13:09:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:47:12.857 13:09:32 -- host/discovery.sh@129 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:47:12.857 13:09:32 -- host/discovery.sh@130 -- # get_subsystem_paths nvme0 00:47:12.857 13:09:32 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:47:12.857 13:09:32 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:47:12.857 13:09:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:47:12.857 13:09:32 -- common/autotest_common.sh@10 -- # set +x 00:47:12.857 13:09:32 -- 
host/discovery.sh@63 -- # sort -n 00:47:12.857 13:09:32 -- host/discovery.sh@63 -- # xargs 00:47:12.857 13:09:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:47:12.857 13:09:32 -- host/discovery.sh@130 -- # [[ 4421 == \4\4\2\1 ]] 00:47:12.857 13:09:32 -- host/discovery.sh@131 -- # get_notification_count 00:47:12.857 13:09:32 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:47:12.857 13:09:32 -- host/discovery.sh@74 -- # jq '. | length' 00:47:12.857 13:09:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:47:12.857 13:09:32 -- common/autotest_common.sh@10 -- # set +x 00:47:12.857 13:09:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:47:12.857 13:09:32 -- host/discovery.sh@74 -- # notification_count=0 00:47:12.857 13:09:32 -- host/discovery.sh@75 -- # notify_id=2 00:47:12.857 13:09:32 -- host/discovery.sh@132 -- # [[ 0 == 0 ]] 00:47:12.857 13:09:32 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:47:12.857 13:09:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:47:12.857 13:09:32 -- common/autotest_common.sh@10 -- # set +x 00:47:12.857 13:09:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:47:12.857 13:09:32 -- host/discovery.sh@135 -- # sleep 1 00:47:13.792 13:09:33 -- host/discovery.sh@136 -- # get_subsystem_names 00:47:13.792 13:09:33 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:47:13.792 13:09:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:47:13.792 13:09:33 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:47:13.792 13:09:33 -- common/autotest_common.sh@10 -- # set +x 00:47:13.792 13:09:33 -- host/discovery.sh@59 -- # sort 00:47:13.792 13:09:33 -- host/discovery.sh@59 -- # xargs 00:47:13.792 13:09:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:47:14.048 13:09:33 -- host/discovery.sh@136 -- # [[ '' == '' ]] 00:47:14.048 13:09:33 -- host/discovery.sh@137 -- # get_bdev_list 00:47:14.048 13:09:33 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:47:14.048 13:09:33 -- host/discovery.sh@55 -- # sort 00:47:14.048 13:09:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:47:14.048 13:09:33 -- host/discovery.sh@55 -- # xargs 00:47:14.048 13:09:33 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:47:14.048 13:09:33 -- common/autotest_common.sh@10 -- # set +x 00:47:14.048 13:09:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:47:14.048 13:09:33 -- host/discovery.sh@137 -- # [[ '' == '' ]] 00:47:14.048 13:09:33 -- host/discovery.sh@138 -- # get_notification_count 00:47:14.048 13:09:33 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:47:14.048 13:09:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:47:14.048 13:09:33 -- common/autotest_common.sh@10 -- # set +x 00:47:14.048 13:09:33 -- host/discovery.sh@74 -- # jq '. 
| length' 00:47:14.048 13:09:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:47:14.048 13:09:33 -- host/discovery.sh@74 -- # notification_count=2 00:47:14.048 13:09:33 -- host/discovery.sh@75 -- # notify_id=4 00:47:14.048 13:09:33 -- host/discovery.sh@139 -- # [[ 2 == 2 ]] 00:47:14.048 13:09:33 -- host/discovery.sh@142 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:47:14.048 13:09:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:47:14.048 13:09:33 -- common/autotest_common.sh@10 -- # set +x 00:47:14.995 [2024-07-22 13:09:34.374604] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:47:14.995 [2024-07-22 13:09:34.374627] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:47:14.995 [2024-07-22 13:09:34.374645] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:47:15.253 [2024-07-22 13:09:34.460722] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:47:15.253 [2024-07-22 13:09:34.520040] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:47:15.253 [2024-07-22 13:09:34.520313] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:47:15.253 13:09:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:47:15.253 13:09:34 -- host/discovery.sh@144 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:47:15.253 13:09:34 -- common/autotest_common.sh@640 -- # local es=0 00:47:15.253 13:09:34 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:47:15.253 13:09:34 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:47:15.253 13:09:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:47:15.253 13:09:34 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:47:15.253 13:09:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:47:15.253 13:09:34 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:47:15.253 13:09:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:47:15.253 13:09:34 -- common/autotest_common.sh@10 -- # set +x 00:47:15.253 2024/07/22 13:09:34 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:47:15.253 request: 00:47:15.253 { 00:47:15.253 "method": "bdev_nvme_start_discovery", 00:47:15.253 "params": { 00:47:15.253 "name": "nvme", 00:47:15.253 "trtype": "tcp", 00:47:15.253 "traddr": "10.0.0.2", 00:47:15.253 "hostnqn": "nqn.2021-12.io.spdk:test", 00:47:15.253 "adrfam": "ipv4", 00:47:15.253 "trsvcid": "8009", 00:47:15.253 "wait_for_attach": true 00:47:15.253 } 00:47:15.253 } 00:47:15.253 Got JSON-RPC error response 00:47:15.253 GoRPCClient: error on JSON-RPC call 00:47:15.253 13:09:34 -- 
common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:47:15.253 13:09:34 -- common/autotest_common.sh@643 -- # es=1 00:47:15.253 13:09:34 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:47:15.253 13:09:34 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:47:15.253 13:09:34 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:47:15.253 13:09:34 -- host/discovery.sh@146 -- # get_discovery_ctrlrs 00:47:15.253 13:09:34 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:47:15.253 13:09:34 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:47:15.253 13:09:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:47:15.253 13:09:34 -- host/discovery.sh@67 -- # xargs 00:47:15.253 13:09:34 -- host/discovery.sh@67 -- # sort 00:47:15.253 13:09:34 -- common/autotest_common.sh@10 -- # set +x 00:47:15.253 13:09:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:47:15.253 13:09:34 -- host/discovery.sh@146 -- # [[ nvme == \n\v\m\e ]] 00:47:15.253 13:09:34 -- host/discovery.sh@147 -- # get_bdev_list 00:47:15.253 13:09:34 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:47:15.253 13:09:34 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:47:15.253 13:09:34 -- host/discovery.sh@55 -- # sort 00:47:15.253 13:09:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:47:15.253 13:09:34 -- host/discovery.sh@55 -- # xargs 00:47:15.253 13:09:34 -- common/autotest_common.sh@10 -- # set +x 00:47:15.254 13:09:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:47:15.254 13:09:34 -- host/discovery.sh@147 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:47:15.254 13:09:34 -- host/discovery.sh@150 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:47:15.254 13:09:34 -- common/autotest_common.sh@640 -- # local es=0 00:47:15.254 13:09:34 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:47:15.254 13:09:34 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:47:15.254 13:09:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:47:15.254 13:09:34 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:47:15.254 13:09:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:47:15.254 13:09:34 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:47:15.254 13:09:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:47:15.254 13:09:34 -- common/autotest_common.sh@10 -- # set +x 00:47:15.254 2024/07/22 13:09:34 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:47:15.254 request: 00:47:15.254 { 00:47:15.254 "method": "bdev_nvme_start_discovery", 00:47:15.254 "params": { 00:47:15.254 "name": "nvme_second", 00:47:15.254 "trtype": "tcp", 00:47:15.254 "traddr": "10.0.0.2", 00:47:15.254 "hostnqn": "nqn.2021-12.io.spdk:test", 00:47:15.254 "adrfam": "ipv4", 00:47:15.254 "trsvcid": "8009", 00:47:15.254 "wait_for_attach": true 00:47:15.254 } 00:47:15.254 } 00:47:15.254 Got JSON-RPC error response 00:47:15.254 
GoRPCClient: error on JSON-RPC call 00:47:15.254 13:09:34 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:47:15.254 13:09:34 -- common/autotest_common.sh@643 -- # es=1 00:47:15.254 13:09:34 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:47:15.254 13:09:34 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:47:15.254 13:09:34 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:47:15.254 13:09:34 -- host/discovery.sh@152 -- # get_discovery_ctrlrs 00:47:15.511 13:09:34 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:47:15.511 13:09:34 -- host/discovery.sh@67 -- # sort 00:47:15.511 13:09:34 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:47:15.511 13:09:34 -- host/discovery.sh@67 -- # xargs 00:47:15.511 13:09:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:47:15.511 13:09:34 -- common/autotest_common.sh@10 -- # set +x 00:47:15.511 13:09:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:47:15.511 13:09:34 -- host/discovery.sh@152 -- # [[ nvme == \n\v\m\e ]] 00:47:15.511 13:09:34 -- host/discovery.sh@153 -- # get_bdev_list 00:47:15.511 13:09:34 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:47:15.511 13:09:34 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:47:15.511 13:09:34 -- host/discovery.sh@55 -- # sort 00:47:15.511 13:09:34 -- host/discovery.sh@55 -- # xargs 00:47:15.511 13:09:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:47:15.511 13:09:34 -- common/autotest_common.sh@10 -- # set +x 00:47:15.511 13:09:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:47:15.511 13:09:34 -- host/discovery.sh@153 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:47:15.511 13:09:34 -- host/discovery.sh@156 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:47:15.511 13:09:34 -- common/autotest_common.sh@640 -- # local es=0 00:47:15.511 13:09:34 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:47:15.511 13:09:34 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:47:15.511 13:09:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:47:15.511 13:09:34 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:47:15.511 13:09:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:47:15.511 13:09:34 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:47:15.511 13:09:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:47:15.511 13:09:34 -- common/autotest_common.sh@10 -- # set +x 00:47:16.444 [2024-07-22 13:09:35.797881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:16.444 [2024-07-22 13:09:35.797985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:16.444 [2024-07-22 13:09:35.798003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944630 with addr=10.0.0.2, port=8010 00:47:16.444 [2024-07-22 13:09:35.798020] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:47:16.444 [2024-07-22 13:09:35.798029] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:47:16.444 [2024-07-22 13:09:35.798037] bdev_nvme.c:6821:discovery_poller: 
*ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:47:17.815 [2024-07-22 13:09:36.797889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:17.815 [2024-07-22 13:09:36.797998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:17.815 [2024-07-22 13:09:36.798017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944630 with addr=10.0.0.2, port=8010 00:47:17.815 [2024-07-22 13:09:36.798035] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:47:17.815 [2024-07-22 13:09:36.798046] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:47:17.815 [2024-07-22 13:09:36.798055] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:47:18.380 [2024-07-22 13:09:37.797776] bdev_nvme.c:6802:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:47:18.380 2024/07/22 13:09:37 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 attach_timeout_ms:3000 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8010 trtype:tcp], err: error received for bdev_nvme_start_discovery method, err: Code=-110 Msg=Connection timed out 00:47:18.638 request: 00:47:18.639 { 00:47:18.639 "method": "bdev_nvme_start_discovery", 00:47:18.639 "params": { 00:47:18.639 "name": "nvme_second", 00:47:18.639 "trtype": "tcp", 00:47:18.639 "traddr": "10.0.0.2", 00:47:18.639 "hostnqn": "nqn.2021-12.io.spdk:test", 00:47:18.639 "adrfam": "ipv4", 00:47:18.639 "trsvcid": "8010", 00:47:18.639 "attach_timeout_ms": 3000 00:47:18.639 } 00:47:18.639 } 00:47:18.639 Got JSON-RPC error response 00:47:18.639 GoRPCClient: error on JSON-RPC call 00:47:18.639 13:09:37 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:47:18.639 13:09:37 -- common/autotest_common.sh@643 -- # es=1 00:47:18.639 13:09:37 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:47:18.639 13:09:37 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:47:18.639 13:09:37 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:47:18.639 13:09:37 -- host/discovery.sh@158 -- # get_discovery_ctrlrs 00:47:18.639 13:09:37 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:47:18.639 13:09:37 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:47:18.639 13:09:37 -- host/discovery.sh@67 -- # sort 00:47:18.639 13:09:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:47:18.639 13:09:37 -- host/discovery.sh@67 -- # xargs 00:47:18.639 13:09:37 -- common/autotest_common.sh@10 -- # set +x 00:47:18.639 13:09:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:47:18.639 13:09:37 -- host/discovery.sh@158 -- # [[ nvme == \n\v\m\e ]] 00:47:18.639 13:09:37 -- host/discovery.sh@160 -- # trap - SIGINT SIGTERM EXIT 00:47:18.639 13:09:37 -- host/discovery.sh@162 -- # kill 95496 00:47:18.639 13:09:37 -- host/discovery.sh@163 -- # nvmftestfini 00:47:18.639 13:09:37 -- nvmf/common.sh@476 -- # nvmfcleanup 00:47:18.639 13:09:37 -- nvmf/common.sh@116 -- # sync 00:47:18.639 13:09:37 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:47:18.639 13:09:37 -- nvmf/common.sh@119 -- # set +e 00:47:18.639 13:09:37 -- nvmf/common.sh@120 -- # for i in {1..20} 00:47:18.639 13:09:37 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:47:18.639 rmmod nvme_tcp 00:47:18.639 rmmod nvme_fabrics 00:47:18.639 rmmod nvme_keyring 00:47:18.639 13:09:37 -- nvmf/common.sh@122 -- # modprobe -v 
-r nvme-fabrics 00:47:18.639 13:09:37 -- nvmf/common.sh@123 -- # set -e 00:47:18.639 13:09:37 -- nvmf/common.sh@124 -- # return 0 00:47:18.639 13:09:37 -- nvmf/common.sh@477 -- # '[' -n 95445 ']' 00:47:18.639 13:09:37 -- nvmf/common.sh@478 -- # killprocess 95445 00:47:18.639 13:09:37 -- common/autotest_common.sh@926 -- # '[' -z 95445 ']' 00:47:18.639 13:09:37 -- common/autotest_common.sh@930 -- # kill -0 95445 00:47:18.639 13:09:37 -- common/autotest_common.sh@931 -- # uname 00:47:18.639 13:09:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:47:18.639 13:09:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 95445 00:47:18.639 killing process with pid 95445 00:47:18.639 13:09:38 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:47:18.639 13:09:38 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:47:18.639 13:09:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 95445' 00:47:18.639 13:09:38 -- common/autotest_common.sh@945 -- # kill 95445 00:47:18.639 13:09:38 -- common/autotest_common.sh@950 -- # wait 95445 00:47:18.897 13:09:38 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:47:18.897 13:09:38 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:47:18.897 13:09:38 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:47:18.897 13:09:38 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:47:18.897 13:09:38 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:47:18.897 13:09:38 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:47:18.897 13:09:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:47:18.897 13:09:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:47:18.897 13:09:38 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:47:18.897 ************************************ 00:47:18.897 END TEST nvmf_discovery 00:47:18.897 ************************************ 00:47:18.897 00:47:18.897 real 0m14.095s 00:47:18.897 user 0m27.581s 00:47:18.897 sys 0m1.737s 00:47:18.897 13:09:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:47:18.897 13:09:38 -- common/autotest_common.sh@10 -- # set +x 00:47:18.897 13:09:38 -- nvmf/nvmf.sh@102 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:47:18.897 13:09:38 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:47:18.897 13:09:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:47:18.897 13:09:38 -- common/autotest_common.sh@10 -- # set +x 00:47:18.897 ************************************ 00:47:18.897 START TEST nvmf_discovery_remove_ifc 00:47:18.897 ************************************ 00:47:18.897 13:09:38 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:47:19.155 * Looking for test storage... 
00:47:19.155 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:47:19.155 13:09:38 -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:47:19.155 13:09:38 -- nvmf/common.sh@7 -- # uname -s 00:47:19.155 13:09:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:47:19.155 13:09:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:47:19.155 13:09:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:47:19.155 13:09:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:47:19.155 13:09:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:47:19.156 13:09:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:47:19.156 13:09:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:47:19.156 13:09:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:47:19.156 13:09:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:47:19.156 13:09:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:47:19.156 13:09:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:864ae4a3-cd96-439b-8ef2-6dab4d992115 00:47:19.156 13:09:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=864ae4a3-cd96-439b-8ef2-6dab4d992115 00:47:19.156 13:09:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:47:19.156 13:09:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:47:19.156 13:09:38 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:47:19.156 13:09:38 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:47:19.156 13:09:38 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:47:19.156 13:09:38 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:47:19.156 13:09:38 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:47:19.156 13:09:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:19.156 13:09:38 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:19.156 13:09:38 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:19.156 13:09:38 -- 
paths/export.sh@5 -- # export PATH 00:47:19.156 13:09:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:19.156 13:09:38 -- nvmf/common.sh@46 -- # : 0 00:47:19.156 13:09:38 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:47:19.156 13:09:38 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:47:19.156 13:09:38 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:47:19.156 13:09:38 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:47:19.156 13:09:38 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:47:19.156 13:09:38 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:47:19.156 13:09:38 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:47:19.156 13:09:38 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:47:19.156 13:09:38 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:47:19.156 13:09:38 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:47:19.156 13:09:38 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:47:19.156 13:09:38 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:47:19.156 13:09:38 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:47:19.156 13:09:38 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:47:19.156 13:09:38 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:47:19.156 13:09:38 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:47:19.156 13:09:38 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:47:19.156 13:09:38 -- nvmf/common.sh@436 -- # prepare_net_devs 00:47:19.156 13:09:38 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:47:19.156 13:09:38 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:47:19.156 13:09:38 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:47:19.156 13:09:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:47:19.156 13:09:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:47:19.156 13:09:38 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:47:19.156 13:09:38 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:47:19.156 13:09:38 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:47:19.156 13:09:38 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:47:19.156 13:09:38 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:47:19.156 13:09:38 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:47:19.156 13:09:38 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:47:19.156 13:09:38 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:47:19.156 13:09:38 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:47:19.156 13:09:38 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:47:19.156 13:09:38 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:47:19.156 13:09:38 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:47:19.156 13:09:38 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:47:19.156 13:09:38 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
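
The common.sh trace above ends by defining NVMF_TARGET_NS_CMD, the prefix the test uses to run commands inside the target's network namespace. A minimal sketch of that pattern, reusing the names from the trace (the real common.sh defines several more variables around it):

    NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
    NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
    # Prefixing a command with the array runs it inside the target namespace,
    # e.g. the nvmf_tgt launch that appears later in this log:
    "${NVMF_TARGET_NS_CMD[@]}" /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
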
00:47:19.156 13:09:38 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:47:19.156 13:09:38 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:47:19.156 13:09:38 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:47:19.156 13:09:38 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:47:19.156 13:09:38 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:47:19.156 13:09:38 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:47:19.156 Cannot find device "nvmf_tgt_br" 00:47:19.156 13:09:38 -- nvmf/common.sh@154 -- # true 00:47:19.156 13:09:38 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:47:19.156 Cannot find device "nvmf_tgt_br2" 00:47:19.156 13:09:38 -- nvmf/common.sh@155 -- # true 00:47:19.156 13:09:38 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:47:19.156 13:09:38 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:47:19.156 Cannot find device "nvmf_tgt_br" 00:47:19.156 13:09:38 -- nvmf/common.sh@157 -- # true 00:47:19.156 13:09:38 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:47:19.156 Cannot find device "nvmf_tgt_br2" 00:47:19.156 13:09:38 -- nvmf/common.sh@158 -- # true 00:47:19.156 13:09:38 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:47:19.156 13:09:38 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:47:19.156 13:09:38 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:47:19.156 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:47:19.156 13:09:38 -- nvmf/common.sh@161 -- # true 00:47:19.156 13:09:38 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:47:19.156 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:47:19.156 13:09:38 -- nvmf/common.sh@162 -- # true 00:47:19.156 13:09:38 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:47:19.156 13:09:38 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:47:19.156 13:09:38 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:47:19.156 13:09:38 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:47:19.156 13:09:38 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:47:19.156 13:09:38 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:47:19.421 13:09:38 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:47:19.421 13:09:38 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:47:19.421 13:09:38 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:47:19.421 13:09:38 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:47:19.421 13:09:38 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:47:19.421 13:09:38 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:47:19.421 13:09:38 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:47:19.421 13:09:38 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:47:19.421 13:09:38 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:47:19.421 13:09:38 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:47:19.421 13:09:38 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:47:19.421 13:09:38 -- nvmf/common.sh@192 -- # ip 
link set nvmf_br up 00:47:19.421 13:09:38 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:47:19.421 13:09:38 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:47:19.421 13:09:38 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:47:19.421 13:09:38 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:47:19.421 13:09:38 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:47:19.421 13:09:38 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:47:19.421 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:47:19.421 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.051 ms 00:47:19.421 00:47:19.421 --- 10.0.0.2 ping statistics --- 00:47:19.421 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:47:19.421 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:47:19.421 13:09:38 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:47:19.421 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:47:19.421 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.031 ms 00:47:19.421 00:47:19.421 --- 10.0.0.3 ping statistics --- 00:47:19.421 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:47:19.421 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:47:19.421 13:09:38 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:47:19.421 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:47:19.421 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.016 ms 00:47:19.421 00:47:19.421 --- 10.0.0.1 ping statistics --- 00:47:19.421 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:47:19.421 rtt min/avg/max/mdev = 0.016/0.016/0.016/0.000 ms 00:47:19.421 13:09:38 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:47:19.421 13:09:38 -- nvmf/common.sh@421 -- # return 0 00:47:19.421 13:09:38 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:47:19.421 13:09:38 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:47:19.421 13:09:38 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:47:19.421 13:09:38 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:47:19.421 13:09:38 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:47:19.421 13:09:38 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:47:19.421 13:09:38 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:47:19.421 13:09:38 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:47:19.421 13:09:38 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:47:19.421 13:09:38 -- common/autotest_common.sh@712 -- # xtrace_disable 00:47:19.421 13:09:38 -- common/autotest_common.sh@10 -- # set +x 00:47:19.421 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:47:19.421 13:09:38 -- nvmf/common.sh@469 -- # nvmfpid=96014 00:47:19.421 13:09:38 -- nvmf/common.sh@470 -- # waitforlisten 96014 00:47:19.421 13:09:38 -- common/autotest_common.sh@819 -- # '[' -z 96014 ']' 00:47:19.421 13:09:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:47:19.421 13:09:38 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:47:19.421 13:09:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:47:19.421 13:09:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
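
The successful pings above are the tail end of nvmf_veth_init: one veth pair per side, bridged together, with the target-side interface moved into the nvmf_tgt_ns_spdk namespace. A condensed sketch of that topology using the same interface names and addresses as the trace (the second target pair, nvmf_tgt_if2/nvmf_tgt_br2, is set up the same way; cleanup and error handling omitted):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2    # default (initiator) namespace -> target namespace
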
00:47:19.422 13:09:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:47:19.422 13:09:38 -- common/autotest_common.sh@10 -- # set +x 00:47:19.422 [2024-07-22 13:09:38.787242] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:47:19.422 [2024-07-22 13:09:38.787327] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:47:19.697 [2024-07-22 13:09:38.927802] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:19.697 [2024-07-22 13:09:38.988873] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:47:19.697 [2024-07-22 13:09:38.989002] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:47:19.697 [2024-07-22 13:09:38.989015] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:47:19.697 [2024-07-22 13:09:38.989022] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:47:19.697 [2024-07-22 13:09:38.989049] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:47:20.629 13:09:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:47:20.629 13:09:39 -- common/autotest_common.sh@852 -- # return 0 00:47:20.629 13:09:39 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:47:20.629 13:09:39 -- common/autotest_common.sh@718 -- # xtrace_disable 00:47:20.629 13:09:39 -- common/autotest_common.sh@10 -- # set +x 00:47:20.629 13:09:39 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:47:20.629 13:09:39 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:47:20.629 13:09:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:47:20.629 13:09:39 -- common/autotest_common.sh@10 -- # set +x 00:47:20.629 [2024-07-22 13:09:39.752666] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:47:20.629 [2024-07-22 13:09:39.760766] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:47:20.629 null0 00:47:20.629 [2024-07-22 13:09:39.792730] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:47:20.629 13:09:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:47:20.629 13:09:39 -- host/discovery_remove_ifc.sh@59 -- # hostpid=96066 00:47:20.630 13:09:39 -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:47:20.630 13:09:39 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 96066 /tmp/host.sock 00:47:20.630 13:09:39 -- common/autotest_common.sh@819 -- # '[' -z 96066 ']' 00:47:20.630 13:09:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/tmp/host.sock 00:47:20.630 13:09:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:47:20.630 13:09:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:47:20.630 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:47:20.630 13:09:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:47:20.630 13:09:39 -- common/autotest_common.sh@10 -- # set +x 00:47:20.630 [2024-07-22 13:09:39.869482] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:47:20.630 [2024-07-22 13:09:39.869789] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96066 ] 00:47:20.630 [2024-07-22 13:09:40.002710] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:20.887 [2024-07-22 13:09:40.069247] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:47:20.887 [2024-07-22 13:09:40.069661] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:47:21.452 13:09:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:47:21.452 13:09:40 -- common/autotest_common.sh@852 -- # return 0 00:47:21.452 13:09:40 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:47:21.452 13:09:40 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:47:21.452 13:09:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:47:21.452 13:09:40 -- common/autotest_common.sh@10 -- # set +x 00:47:21.452 13:09:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:47:21.452 13:09:40 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:47:21.452 13:09:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:47:21.452 13:09:40 -- common/autotest_common.sh@10 -- # set +x 00:47:21.452 13:09:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:47:21.452 13:09:40 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:47:21.452 13:09:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:47:21.452 13:09:40 -- common/autotest_common.sh@10 -- # set +x 00:47:22.825 [2024-07-22 13:09:41.845604] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:47:22.825 [2024-07-22 13:09:41.845630] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:47:22.825 [2024-07-22 13:09:41.845646] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:47:22.825 [2024-07-22 13:09:41.931730] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:47:22.825 [2024-07-22 13:09:41.987441] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:47:22.825 [2024-07-22 13:09:41.987482] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:47:22.825 [2024-07-22 13:09:41.987505] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:47:22.825 [2024-07-22 13:09:41.987519] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:47:22.826 [2024-07-22 13:09:41.987555] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:47:22.826 13:09:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:47:22.826 13:09:41 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:47:22.826 13:09:41 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:47:22.826 13:09:41 -- 
host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:47:22.826 13:09:41 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:47:22.826 [2024-07-22 13:09:41.994357] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x12187d0 was disconnected and freed. delete nvme_qpair. 00:47:22.826 13:09:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:47:22.826 13:09:41 -- common/autotest_common.sh@10 -- # set +x 00:47:22.826 13:09:41 -- host/discovery_remove_ifc.sh@29 -- # sort 00:47:22.826 13:09:41 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:47:22.826 13:09:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:47:22.826 13:09:42 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:47:22.826 13:09:42 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:47:22.826 13:09:42 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:47:22.826 13:09:42 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:47:22.826 13:09:42 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:47:22.826 13:09:42 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:47:22.826 13:09:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:47:22.826 13:09:42 -- common/autotest_common.sh@10 -- # set +x 00:47:22.826 13:09:42 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:47:22.826 13:09:42 -- host/discovery_remove_ifc.sh@29 -- # sort 00:47:22.826 13:09:42 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:47:22.826 13:09:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:47:22.826 13:09:42 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:47:22.826 13:09:42 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:47:23.758 13:09:43 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:47:23.758 13:09:43 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:47:23.758 13:09:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:47:23.758 13:09:43 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:47:23.758 13:09:43 -- common/autotest_common.sh@10 -- # set +x 00:47:23.758 13:09:43 -- host/discovery_remove_ifc.sh@29 -- # sort 00:47:23.758 13:09:43 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:47:23.758 13:09:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:47:24.015 13:09:43 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:47:24.015 13:09:43 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:47:24.948 13:09:44 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:47:24.948 13:09:44 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:47:24.948 13:09:44 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:47:24.948 13:09:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:47:24.948 13:09:44 -- common/autotest_common.sh@10 -- # set +x 00:47:24.948 13:09:44 -- host/discovery_remove_ifc.sh@29 -- # sort 00:47:24.948 13:09:44 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:47:24.948 13:09:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:47:24.948 13:09:44 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:47:24.948 13:09:44 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:47:25.881 13:09:45 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:47:25.881 13:09:45 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 
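
The repeating get_bdev_list / sleep 1 cycle above is the test polling the host-side SPDK app until the bdev list matches what it expects. A minimal sketch of those helpers as they behave in this trace, with rpc.py standing in for the rpc_cmd wrapper (the real script also toggles xtrace and gives up after a retry limit):

    get_bdev_list() {
        # Ask the host app on /tmp/host.sock for its bdevs and flatten the
        # names into one sorted, space-separated line.
        rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    wait_for_bdev() {
        local expected=$1
        while [[ "$(get_bdev_list)" != "$expected" ]]; do
            sleep 1
        done
    }

    wait_for_bdev nvme0n1    # block until discovery has attached the namespace
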
00:47:25.881 13:09:45 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:47:25.881 13:09:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:47:25.881 13:09:45 -- common/autotest_common.sh@10 -- # set +x 00:47:25.881 13:09:45 -- host/discovery_remove_ifc.sh@29 -- # sort 00:47:25.881 13:09:45 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:47:25.881 13:09:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:47:26.139 13:09:45 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:47:26.139 13:09:45 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:47:27.071 13:09:46 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:47:27.071 13:09:46 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:47:27.071 13:09:46 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:47:27.071 13:09:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:47:27.071 13:09:46 -- common/autotest_common.sh@10 -- # set +x 00:47:27.071 13:09:46 -- host/discovery_remove_ifc.sh@29 -- # sort 00:47:27.071 13:09:46 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:47:27.071 13:09:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:47:27.071 13:09:46 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:47:27.071 13:09:46 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:47:28.020 13:09:47 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:47:28.020 13:09:47 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:47:28.020 13:09:47 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:47:28.020 13:09:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:47:28.020 13:09:47 -- common/autotest_common.sh@10 -- # set +x 00:47:28.020 13:09:47 -- host/discovery_remove_ifc.sh@29 -- # sort 00:47:28.020 13:09:47 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:47:28.020 13:09:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:47:28.020 [2024-07-22 13:09:47.415486] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:47:28.020 [2024-07-22 13:09:47.415576] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:47:28.020 [2024-07-22 13:09:47.415592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:28.020 [2024-07-22 13:09:47.415603] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:47:28.020 [2024-07-22 13:09:47.415612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:28.020 [2024-07-22 13:09:47.415621] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:47:28.020 [2024-07-22 13:09:47.415631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:28.020 [2024-07-22 13:09:47.415641] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:47:28.020 [2024-07-22 13:09:47.415649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:28.020 [2024-07-22 
13:09:47.415659] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:47:28.020 [2024-07-22 13:09:47.415667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:28.020 [2024-07-22 13:09:47.415676] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e04c0 is same with the state(5) to be set 00:47:28.020 [2024-07-22 13:09:47.425498] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11e04c0 (9): Bad file descriptor 00:47:28.020 13:09:47 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:47:28.020 13:09:47 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:47:28.020 [2024-07-22 13:09:47.435520] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:47:29.392 13:09:48 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:47:29.392 13:09:48 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:47:29.392 13:09:48 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:47:29.392 13:09:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:47:29.392 13:09:48 -- common/autotest_common.sh@10 -- # set +x 00:47:29.392 13:09:48 -- host/discovery_remove_ifc.sh@29 -- # sort 00:47:29.392 13:09:48 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:47:29.392 [2024-07-22 13:09:48.491249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:47:30.325 [2024-07-22 13:09:49.517196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:47:30.325 [2024-07-22 13:09:49.517326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11e04c0 with addr=10.0.0.2, port=4420 00:47:30.325 [2024-07-22 13:09:49.517363] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e04c0 is same with the state(5) to be set 00:47:30.325 [2024-07-22 13:09:49.517420] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:47:30.325 [2024-07-22 13:09:49.517444] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:47:30.325 [2024-07-22 13:09:49.517463] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:47:30.325 [2024-07-22 13:09:49.517484] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:47:30.325 [2024-07-22 13:09:49.518341] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11e04c0 (9): Bad file descriptor 00:47:30.325 [2024-07-22 13:09:49.518405] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:47:30.325 [2024-07-22 13:09:49.518458] bdev_nvme.c:6510:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:47:30.325 [2024-07-22 13:09:49.518531] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:47:30.325 [2024-07-22 13:09:49.518562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:30.325 [2024-07-22 13:09:49.518622] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:47:30.325 [2024-07-22 13:09:49.518645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:30.325 [2024-07-22 13:09:49.518668] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:47:30.325 [2024-07-22 13:09:49.518690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:30.325 [2024-07-22 13:09:49.518713] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:47:30.325 [2024-07-22 13:09:49.518733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:30.325 [2024-07-22 13:09:49.518756] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:47:30.325 [2024-07-22 13:09:49.518777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:30.325 [2024-07-22 13:09:49.518812] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
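
The errno 110 connect failures and the controllers dropping into the failed state above are the point of this test: the target-side address was deleted and nvmf_tgt_if taken down, so the host's reconnect attempts time out until the ctrlr-loss timeout expires and nvme0n1 disappears. The re-add that follows in the log restores the path; in outline, the sequence discovery_remove_ifc.sh drives is (same commands as in the trace, wait_for_bdev as sketched earlier):

    # Pull the target interface out from under the attached controller
    ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down
    wait_for_bdev ''          # bdev list drains once the controller is given up

    # Bring it back: discovery re-attaches and the namespace returns as nvme1n1
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    wait_for_bdev nvme1n1
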
00:47:30.325 [2024-07-22 13:09:49.518860] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11df9b0 (9): Bad file descriptor 00:47:30.325 [2024-07-22 13:09:49.519859] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:47:30.325 [2024-07-22 13:09:49.519895] nvme_ctrlr.c:1136:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:47:30.325 13:09:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:47:30.325 13:09:49 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:47:30.325 13:09:49 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:47:31.258 13:09:50 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:47:31.258 13:09:50 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:47:31.258 13:09:50 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:47:31.258 13:09:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:47:31.258 13:09:50 -- host/discovery_remove_ifc.sh@29 -- # sort 00:47:31.258 13:09:50 -- common/autotest_common.sh@10 -- # set +x 00:47:31.258 13:09:50 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:47:31.258 13:09:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:47:31.258 13:09:50 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:47:31.258 13:09:50 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:47:31.258 13:09:50 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:47:31.258 13:09:50 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:47:31.258 13:09:50 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:47:31.258 13:09:50 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:47:31.258 13:09:50 -- host/discovery_remove_ifc.sh@29 -- # sort 00:47:31.258 13:09:50 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:47:31.258 13:09:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:47:31.258 13:09:50 -- common/autotest_common.sh@10 -- # set +x 00:47:31.258 13:09:50 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:47:31.258 13:09:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:47:31.516 13:09:50 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:47:31.516 13:09:50 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:47:32.448 [2024-07-22 13:09:51.527524] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:47:32.448 [2024-07-22 13:09:51.527576] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:47:32.448 [2024-07-22 13:09:51.527593] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:47:32.448 [2024-07-22 13:09:51.613672] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:47:32.448 [2024-07-22 13:09:51.668657] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:47:32.448 [2024-07-22 13:09:51.668699] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:47:32.448 [2024-07-22 13:09:51.668721] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:47:32.448 [2024-07-22 13:09:51.668736] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] 
attach nvme1 done 00:47:32.448 [2024-07-22 13:09:51.668744] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:47:32.448 [2024-07-22 13:09:51.676109] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1222cc0 was disconnected and freed. delete nvme_qpair. 00:47:32.448 13:09:51 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:47:32.448 13:09:51 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:47:32.448 13:09:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:47:32.448 13:09:51 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:47:32.448 13:09:51 -- host/discovery_remove_ifc.sh@29 -- # sort 00:47:32.448 13:09:51 -- common/autotest_common.sh@10 -- # set +x 00:47:32.448 13:09:51 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:47:32.448 13:09:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:47:32.448 13:09:51 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:47:32.448 13:09:51 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:47:32.448 13:09:51 -- host/discovery_remove_ifc.sh@90 -- # killprocess 96066 00:47:32.448 13:09:51 -- common/autotest_common.sh@926 -- # '[' -z 96066 ']' 00:47:32.448 13:09:51 -- common/autotest_common.sh@930 -- # kill -0 96066 00:47:32.448 13:09:51 -- common/autotest_common.sh@931 -- # uname 00:47:32.448 13:09:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:47:32.448 13:09:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 96066 00:47:32.448 killing process with pid 96066 00:47:32.448 13:09:51 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:47:32.448 13:09:51 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:47:32.448 13:09:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 96066' 00:47:32.448 13:09:51 -- common/autotest_common.sh@945 -- # kill 96066 00:47:32.448 13:09:51 -- common/autotest_common.sh@950 -- # wait 96066 00:47:32.706 13:09:51 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:47:32.706 13:09:51 -- nvmf/common.sh@476 -- # nvmfcleanup 00:47:32.706 13:09:51 -- nvmf/common.sh@116 -- # sync 00:47:32.706 13:09:52 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:47:32.706 13:09:52 -- nvmf/common.sh@119 -- # set +e 00:47:32.706 13:09:52 -- nvmf/common.sh@120 -- # for i in {1..20} 00:47:32.706 13:09:52 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:47:32.706 rmmod nvme_tcp 00:47:32.706 rmmod nvme_fabrics 00:47:32.706 rmmod nvme_keyring 00:47:32.706 13:09:52 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:47:32.706 13:09:52 -- nvmf/common.sh@123 -- # set -e 00:47:32.706 13:09:52 -- nvmf/common.sh@124 -- # return 0 00:47:32.706 13:09:52 -- nvmf/common.sh@477 -- # '[' -n 96014 ']' 00:47:32.706 13:09:52 -- nvmf/common.sh@478 -- # killprocess 96014 00:47:32.706 13:09:52 -- common/autotest_common.sh@926 -- # '[' -z 96014 ']' 00:47:32.706 13:09:52 -- common/autotest_common.sh@930 -- # kill -0 96014 00:47:32.706 13:09:52 -- common/autotest_common.sh@931 -- # uname 00:47:32.706 13:09:52 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:47:32.706 13:09:52 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 96014 00:47:32.706 killing process with pid 96014 00:47:32.706 13:09:52 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:47:32.706 13:09:52 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 
00:47:32.706 13:09:52 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 96014' 00:47:32.706 13:09:52 -- common/autotest_common.sh@945 -- # kill 96014 00:47:32.706 13:09:52 -- common/autotest_common.sh@950 -- # wait 96014 00:47:32.965 13:09:52 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:47:32.965 13:09:52 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:47:32.965 13:09:52 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:47:32.965 13:09:52 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:47:32.965 13:09:52 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:47:32.965 13:09:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:47:32.965 13:09:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:47:32.965 13:09:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:47:32.965 13:09:52 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:47:32.965 00:47:32.965 real 0m14.068s 00:47:32.965 user 0m24.119s 00:47:32.965 sys 0m1.502s 00:47:32.965 13:09:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:47:32.965 13:09:52 -- common/autotest_common.sh@10 -- # set +x 00:47:32.965 ************************************ 00:47:32.965 END TEST nvmf_discovery_remove_ifc 00:47:32.965 ************************************ 00:47:33.236 13:09:52 -- nvmf/nvmf.sh@106 -- # [[ tcp == \t\c\p ]] 00:47:33.236 13:09:52 -- nvmf/nvmf.sh@107 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:47:33.236 13:09:52 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:47:33.236 13:09:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:47:33.236 13:09:52 -- common/autotest_common.sh@10 -- # set +x 00:47:33.236 ************************************ 00:47:33.236 START TEST nvmf_digest 00:47:33.236 ************************************ 00:47:33.236 13:09:52 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:47:33.236 * Looking for test storage... 
00:47:33.236 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:47:33.236 13:09:52 -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:47:33.236 13:09:52 -- nvmf/common.sh@7 -- # uname -s 00:47:33.236 13:09:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:47:33.236 13:09:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:47:33.236 13:09:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:47:33.236 13:09:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:47:33.236 13:09:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:47:33.236 13:09:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:47:33.236 13:09:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:47:33.236 13:09:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:47:33.236 13:09:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:47:33.236 13:09:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:47:33.236 13:09:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:864ae4a3-cd96-439b-8ef2-6dab4d992115 00:47:33.236 13:09:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=864ae4a3-cd96-439b-8ef2-6dab4d992115 00:47:33.236 13:09:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:47:33.236 13:09:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:47:33.236 13:09:52 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:47:33.236 13:09:52 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:47:33.236 13:09:52 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:47:33.236 13:09:52 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:47:33.236 13:09:52 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:47:33.236 13:09:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:33.236 13:09:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:33.236 13:09:52 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:33.236 13:09:52 -- paths/export.sh@5 
-- # export PATH 00:47:33.236 13:09:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:33.236 13:09:52 -- nvmf/common.sh@46 -- # : 0 00:47:33.236 13:09:52 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:47:33.236 13:09:52 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:47:33.236 13:09:52 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:47:33.236 13:09:52 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:47:33.236 13:09:52 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:47:33.236 13:09:52 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:47:33.236 13:09:52 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:47:33.236 13:09:52 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:47:33.236 13:09:52 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:47:33.236 13:09:52 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:47:33.236 13:09:52 -- host/digest.sh@16 -- # runtime=2 00:47:33.236 13:09:52 -- host/digest.sh@130 -- # [[ tcp != \t\c\p ]] 00:47:33.236 13:09:52 -- host/digest.sh@132 -- # nvmftestinit 00:47:33.236 13:09:52 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:47:33.236 13:09:52 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:47:33.236 13:09:52 -- nvmf/common.sh@436 -- # prepare_net_devs 00:47:33.236 13:09:52 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:47:33.236 13:09:52 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:47:33.236 13:09:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:47:33.236 13:09:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:47:33.236 13:09:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:47:33.236 13:09:52 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:47:33.236 13:09:52 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:47:33.236 13:09:52 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:47:33.236 13:09:52 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:47:33.236 13:09:52 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:47:33.236 13:09:52 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:47:33.236 13:09:52 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:47:33.236 13:09:52 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:47:33.236 13:09:52 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:47:33.236 13:09:52 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:47:33.236 13:09:52 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:47:33.236 13:09:52 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:47:33.236 13:09:52 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:47:33.236 13:09:52 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:47:33.236 13:09:52 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:47:33.236 13:09:52 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:47:33.236 13:09:52 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:47:33.236 13:09:52 -- nvmf/common.sh@151 -- # 
NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:47:33.236 13:09:52 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:47:33.236 13:09:52 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:47:33.236 Cannot find device "nvmf_tgt_br" 00:47:33.236 13:09:52 -- nvmf/common.sh@154 -- # true 00:47:33.236 13:09:52 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:47:33.236 Cannot find device "nvmf_tgt_br2" 00:47:33.236 13:09:52 -- nvmf/common.sh@155 -- # true 00:47:33.236 13:09:52 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:47:33.236 13:09:52 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:47:33.236 Cannot find device "nvmf_tgt_br" 00:47:33.236 13:09:52 -- nvmf/common.sh@157 -- # true 00:47:33.236 13:09:52 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:47:33.236 Cannot find device "nvmf_tgt_br2" 00:47:33.236 13:09:52 -- nvmf/common.sh@158 -- # true 00:47:33.236 13:09:52 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:47:33.236 13:09:52 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:47:33.506 13:09:52 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:47:33.506 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:47:33.506 13:09:52 -- nvmf/common.sh@161 -- # true 00:47:33.506 13:09:52 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:47:33.506 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:47:33.506 13:09:52 -- nvmf/common.sh@162 -- # true 00:47:33.506 13:09:52 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:47:33.506 13:09:52 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:47:33.506 13:09:52 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:47:33.507 13:09:52 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:47:33.507 13:09:52 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:47:33.507 13:09:52 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:47:33.507 13:09:52 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:47:33.507 13:09:52 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:47:33.507 13:09:52 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:47:33.507 13:09:52 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:47:33.507 13:09:52 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:47:33.507 13:09:52 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:47:33.507 13:09:52 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:47:33.507 13:09:52 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:47:33.507 13:09:52 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:47:33.507 13:09:52 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:47:33.507 13:09:52 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:47:33.507 13:09:52 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:47:33.507 13:09:52 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:47:33.507 13:09:52 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:47:33.507 13:09:52 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:47:33.507 
13:09:52 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:47:33.507 13:09:52 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:47:33.507 13:09:52 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:47:33.507 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:47:33.507 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:47:33.507 00:47:33.507 --- 10.0.0.2 ping statistics --- 00:47:33.507 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:47:33.507 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:47:33.507 13:09:52 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:47:33.507 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:47:33.507 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:47:33.507 00:47:33.507 --- 10.0.0.3 ping statistics --- 00:47:33.507 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:47:33.507 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:47:33.507 13:09:52 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:47:33.507 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:47:33.507 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:47:33.507 00:47:33.507 --- 10.0.0.1 ping statistics --- 00:47:33.507 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:47:33.507 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:47:33.507 13:09:52 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:47:33.507 13:09:52 -- nvmf/common.sh@421 -- # return 0 00:47:33.507 13:09:52 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:47:33.507 13:09:52 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:47:33.507 13:09:52 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:47:33.507 13:09:52 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:47:33.507 13:09:52 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:47:33.507 13:09:52 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:47:33.507 13:09:52 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:47:33.507 13:09:52 -- host/digest.sh@134 -- # trap cleanup SIGINT SIGTERM EXIT 00:47:33.507 13:09:52 -- host/digest.sh@135 -- # run_test nvmf_digest_clean run_digest 00:47:33.507 13:09:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:47:33.507 13:09:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:47:33.507 13:09:52 -- common/autotest_common.sh@10 -- # set +x 00:47:33.507 ************************************ 00:47:33.507 START TEST nvmf_digest_clean 00:47:33.507 ************************************ 00:47:33.507 13:09:52 -- common/autotest_common.sh@1104 -- # run_digest 00:47:33.507 13:09:52 -- host/digest.sh@119 -- # nvmfappstart --wait-for-rpc 00:47:33.507 13:09:52 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:47:33.507 13:09:52 -- common/autotest_common.sh@712 -- # xtrace_disable 00:47:33.507 13:09:52 -- common/autotest_common.sh@10 -- # set +x 00:47:33.507 13:09:52 -- nvmf/common.sh@469 -- # nvmfpid=96480 00:47:33.507 13:09:52 -- nvmf/common.sh@470 -- # waitforlisten 96480 00:47:33.507 13:09:52 -- common/autotest_common.sh@819 -- # '[' -z 96480 ']' 00:47:33.507 13:09:52 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:47:33.507 13:09:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:47:33.507 13:09:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:47:33.507 13:09:52 -- 
common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:47:33.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:47:33.507 13:09:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:47:33.507 13:09:52 -- common/autotest_common.sh@10 -- # set +x 00:47:33.507 [2024-07-22 13:09:52.924817] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:47:33.507 [2024-07-22 13:09:52.924929] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:47:33.768 [2024-07-22 13:09:53.064392] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:33.768 [2024-07-22 13:09:53.141923] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:47:33.768 [2024-07-22 13:09:53.142069] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:47:33.768 [2024-07-22 13:09:53.142085] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:47:33.768 [2024-07-22 13:09:53.142094] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:47:33.768 [2024-07-22 13:09:53.142115] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:47:34.705 13:09:53 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:47:34.705 13:09:53 -- common/autotest_common.sh@852 -- # return 0 00:47:34.705 13:09:53 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:47:34.705 13:09:53 -- common/autotest_common.sh@718 -- # xtrace_disable 00:47:34.705 13:09:53 -- common/autotest_common.sh@10 -- # set +x 00:47:34.705 13:09:53 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:47:34.705 13:09:53 -- host/digest.sh@120 -- # common_target_config 00:47:34.705 13:09:53 -- host/digest.sh@43 -- # rpc_cmd 00:47:34.705 13:09:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:47:34.705 13:09:53 -- common/autotest_common.sh@10 -- # set +x 00:47:34.705 null0 00:47:34.705 [2024-07-22 13:09:54.078221] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:47:34.705 [2024-07-22 13:09:54.102382] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:47:34.705 13:09:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:47:34.705 13:09:54 -- host/digest.sh@122 -- # run_bperf randread 4096 128 00:47:34.705 13:09:54 -- host/digest.sh@77 -- # local rw bs qd 00:47:34.705 13:09:54 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:47:34.705 13:09:54 -- host/digest.sh@80 -- # rw=randread 00:47:34.705 13:09:54 -- host/digest.sh@80 -- # bs=4096 00:47:34.705 13:09:54 -- host/digest.sh@80 -- # qd=128 00:47:34.705 13:09:54 -- host/digest.sh@82 -- # bperfpid=96530 00:47:34.705 13:09:54 -- host/digest.sh@83 -- # waitforlisten 96530 /var/tmp/bperf.sock 00:47:34.705 13:09:54 -- common/autotest_common.sh@819 -- # '[' -z 96530 ']' 00:47:34.705 13:09:54 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:47:34.705 13:09:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:47:34.705 13:09:54 -- 
common/autotest_common.sh@824 -- # local max_retries=100 00:47:34.705 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:47:34.705 13:09:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:47:34.705 13:09:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:47:34.705 13:09:54 -- common/autotest_common.sh@10 -- # set +x 00:47:34.963 [2024-07-22 13:09:54.155035] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:47:34.963 [2024-07-22 13:09:54.155177] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96530 ] 00:47:34.963 [2024-07-22 13:09:54.291287] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:34.963 [2024-07-22 13:09:54.379570] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:47:35.896 13:09:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:47:35.896 13:09:55 -- common/autotest_common.sh@852 -- # return 0 00:47:35.896 13:09:55 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:47:35.896 13:09:55 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:47:35.896 13:09:55 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:47:36.155 13:09:55 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:47:36.155 13:09:55 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:47:36.413 nvme0n1 00:47:36.413 13:09:55 -- host/digest.sh@91 -- # bperf_py perform_tests 00:47:36.413 13:09:55 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:47:36.671 Running I/O for 2 seconds... 
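The first run above (randread, 4 KiB I/O, queue depth 128) is driven entirely over bdevperf's RPC socket. Condensed from the trace, using the same paths and addresses this job uses, the sequence is roughly:

    # start bdevperf suspended so it can be configured over RPC first
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
    # finish framework init, then attach the target with TCP data digest (--ddgst) enabled
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
        --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # run the timed workload (-t 2 above) against the attached nvme0n1 bdev
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

This is a condensed reading of the trace, not the run_bperf helper itself.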
00:47:38.569 00:47:38.569 Latency(us) 00:47:38.569 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:47:38.569 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:47:38.569 nvme0n1 : 2.01 20465.56 79.94 0.00 0.00 6248.68 2815.07 15847.80 00:47:38.569 =================================================================================================================== 00:47:38.569 Total : 20465.56 79.94 0.00 0.00 6248.68 2815.07 15847.80 00:47:38.569 0 00:47:38.569 13:09:57 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:47:38.569 13:09:57 -- host/digest.sh@92 -- # get_accel_stats 00:47:38.569 13:09:57 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:47:38.569 13:09:57 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:47:38.569 | select(.opcode=="crc32c") 00:47:38.569 | "\(.module_name) \(.executed)"' 00:47:38.569 13:09:57 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:47:38.827 13:09:58 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:47:38.827 13:09:58 -- host/digest.sh@93 -- # exp_module=software 00:47:38.827 13:09:58 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:47:38.827 13:09:58 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:47:38.827 13:09:58 -- host/digest.sh@97 -- # killprocess 96530 00:47:38.827 13:09:58 -- common/autotest_common.sh@926 -- # '[' -z 96530 ']' 00:47:38.827 13:09:58 -- common/autotest_common.sh@930 -- # kill -0 96530 00:47:38.827 13:09:58 -- common/autotest_common.sh@931 -- # uname 00:47:38.827 13:09:58 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:47:38.827 13:09:58 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 96530 00:47:38.827 killing process with pid 96530 00:47:38.827 Received shutdown signal, test time was about 2.000000 seconds 00:47:38.827 00:47:38.827 Latency(us) 00:47:38.827 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:47:38.827 =================================================================================================================== 00:47:38.827 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:47:38.827 13:09:58 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:47:38.827 13:09:58 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:47:38.827 13:09:58 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 96530' 00:47:38.827 13:09:58 -- common/autotest_common.sh@945 -- # kill 96530 00:47:38.827 13:09:58 -- common/autotest_common.sh@950 -- # wait 96530 00:47:39.086 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
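The acc_module check in the run above comes from accel_get_stats filtered through jq; written out by itself, with the socket path and expected module taken from this trace, it is:

    # ask the bdevperf instance which accel module executed the crc32c (digest) operations
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
      | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
    # prints "<module_name> <executed>"; the test requires executed > 0 and, with no
    # hardware accel module configured in this job, expects module_name to be "software"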
00:47:39.086 13:09:58 -- host/digest.sh@123 -- # run_bperf randread 131072 16 00:47:39.086 13:09:58 -- host/digest.sh@77 -- # local rw bs qd 00:47:39.086 13:09:58 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:47:39.086 13:09:58 -- host/digest.sh@80 -- # rw=randread 00:47:39.086 13:09:58 -- host/digest.sh@80 -- # bs=131072 00:47:39.086 13:09:58 -- host/digest.sh@80 -- # qd=16 00:47:39.086 13:09:58 -- host/digest.sh@82 -- # bperfpid=96619 00:47:39.086 13:09:58 -- host/digest.sh@83 -- # waitforlisten 96619 /var/tmp/bperf.sock 00:47:39.086 13:09:58 -- common/autotest_common.sh@819 -- # '[' -z 96619 ']' 00:47:39.086 13:09:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:47:39.086 13:09:58 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:47:39.086 13:09:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:47:39.086 13:09:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:47:39.086 13:09:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:47:39.086 13:09:58 -- common/autotest_common.sh@10 -- # set +x 00:47:39.086 [2024-07-22 13:09:58.485086] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:47:39.086 [2024-07-22 13:09:58.485199] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96619 ] 00:47:39.086 I/O size of 131072 is greater than zero copy threshold (65536). 00:47:39.086 Zero copy mechanism will not be used. 00:47:39.374 [2024-07-22 13:09:58.624127] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:39.374 [2024-07-22 13:09:58.712102] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:47:40.307 13:09:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:47:40.307 13:09:59 -- common/autotest_common.sh@852 -- # return 0 00:47:40.307 13:09:59 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:47:40.307 13:09:59 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:47:40.307 13:09:59 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:47:40.564 13:09:59 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:47:40.564 13:09:59 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:47:40.822 nvme0n1 00:47:40.822 13:10:00 -- host/digest.sh@91 -- # bperf_py perform_tests 00:47:40.822 13:10:00 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:47:40.822 I/O size of 131072 is greater than zero copy threshold (65536). 00:47:40.822 Zero copy mechanism will not be used. 00:47:40.822 Running I/O for 2 seconds... 
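Each run re-launches bdevperf and blocks on waitforlisten until /var/tmp/bperf.sock appears before sending any RPCs (the "Waiting for process to start up..." lines above). A minimal stand-in for that wait, purely illustrative and not the actual autotest_common.sh implementation:

    wait_for_sock() {                      # hypothetical helper, not the real waitforlisten
      local pid=$1 sock=$2
      while [ ! -S "$sock" ]; do
        kill -0 "$pid" 2>/dev/null || return 1   # give up if the process already died
        sleep 0.1
      done
    }
    wait_for_sock "$bperfpid" /var/tmp/bperf.sock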
00:47:43.351 00:47:43.351 Latency(us) 00:47:43.351 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:47:43.351 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:47:43.351 nvme0n1 : 2.00 9924.48 1240.56 0.00 0.00 1609.49 655.36 10604.92 00:47:43.351 =================================================================================================================== 00:47:43.351 Total : 9924.48 1240.56 0.00 0.00 1609.49 655.36 10604.92 00:47:43.351 0 00:47:43.351 13:10:02 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:47:43.351 13:10:02 -- host/digest.sh@92 -- # get_accel_stats 00:47:43.351 13:10:02 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:47:43.351 13:10:02 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:47:43.351 | select(.opcode=="crc32c") 00:47:43.351 | "\(.module_name) \(.executed)"' 00:47:43.351 13:10:02 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:47:43.351 13:10:02 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:47:43.351 13:10:02 -- host/digest.sh@93 -- # exp_module=software 00:47:43.351 13:10:02 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:47:43.351 13:10:02 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:47:43.351 13:10:02 -- host/digest.sh@97 -- # killprocess 96619 00:47:43.351 13:10:02 -- common/autotest_common.sh@926 -- # '[' -z 96619 ']' 00:47:43.351 13:10:02 -- common/autotest_common.sh@930 -- # kill -0 96619 00:47:43.351 13:10:02 -- common/autotest_common.sh@931 -- # uname 00:47:43.351 13:10:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:47:43.351 13:10:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 96619 00:47:43.351 killing process with pid 96619 00:47:43.351 Received shutdown signal, test time was about 2.000000 seconds 00:47:43.351 00:47:43.351 Latency(us) 00:47:43.351 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:47:43.351 =================================================================================================================== 00:47:43.351 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:47:43.351 13:10:02 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:47:43.351 13:10:02 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:47:43.351 13:10:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 96619' 00:47:43.351 13:10:02 -- common/autotest_common.sh@945 -- # kill 96619 00:47:43.351 13:10:02 -- common/autotest_common.sh@950 -- # wait 96619 00:47:43.351 13:10:02 -- host/digest.sh@124 -- # run_bperf randwrite 4096 128 00:47:43.351 13:10:02 -- host/digest.sh@77 -- # local rw bs qd 00:47:43.351 13:10:02 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:47:43.351 13:10:02 -- host/digest.sh@80 -- # rw=randwrite 00:47:43.351 13:10:02 -- host/digest.sh@80 -- # bs=4096 00:47:43.351 13:10:02 -- host/digest.sh@80 -- # qd=128 00:47:43.351 13:10:02 -- host/digest.sh@82 -- # bperfpid=96705 00:47:43.351 13:10:02 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:47:43.351 13:10:02 -- host/digest.sh@83 -- # waitforlisten 96705 /var/tmp/bperf.sock 00:47:43.351 13:10:02 -- common/autotest_common.sh@819 -- # '[' -z 96705 ']' 00:47:43.351 13:10:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:47:43.351 13:10:02 -- common/autotest_common.sh@824 -- # 
local max_retries=100 00:47:43.351 13:10:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:47:43.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:47:43.351 13:10:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:47:43.351 13:10:02 -- common/autotest_common.sh@10 -- # set +x 00:47:43.351 [2024-07-22 13:10:02.763931] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:47:43.351 [2024-07-22 13:10:02.764026] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96705 ] 00:47:43.609 [2024-07-22 13:10:02.903505] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:43.609 [2024-07-22 13:10:02.964792] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:47:44.541 13:10:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:47:44.541 13:10:03 -- common/autotest_common.sh@852 -- # return 0 00:47:44.541 13:10:03 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:47:44.541 13:10:03 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:47:44.541 13:10:03 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:47:44.799 13:10:04 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:47:44.799 13:10:04 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:47:45.056 nvme0n1 00:47:45.056 13:10:04 -- host/digest.sh@91 -- # bperf_py perform_tests 00:47:45.056 13:10:04 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:47:45.056 Running I/O for 2 seconds... 
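The four run_bperf invocations in this test differ only in their three positional arguments, which map directly onto bdevperf flags (-w, -o, -q). A simplified sketch of that mapping, omitting the socket wait, stats check and cleanup the real helper performs:

    run_bperf() {                          # simplified sketch of the helper traced above
      local rw=$1 bs=$2 qd=$3
      /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
          -w "$rw" -o "$bs" -t 2 -q "$qd" -z --wait-for-rpc &
      bperfpid=$!
    }
    run_bperf randwrite 4096 128           # the combination this part of the log is running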
00:47:46.986 00:47:46.986 Latency(us) 00:47:46.986 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:47:46.986 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:47:46.986 nvme0n1 : 2.00 27352.66 106.85 0.00 0.00 4674.78 1802.24 15252.01 00:47:46.986 =================================================================================================================== 00:47:46.986 Total : 27352.66 106.85 0.00 0.00 4674.78 1802.24 15252.01 00:47:46.986 0 00:47:46.986 13:10:06 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:47:46.986 13:10:06 -- host/digest.sh@92 -- # get_accel_stats 00:47:46.986 13:10:06 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:47:46.986 13:10:06 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:47:46.986 13:10:06 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:47:46.986 | select(.opcode=="crc32c") 00:47:46.986 | "\(.module_name) \(.executed)"' 00:47:47.243 13:10:06 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:47:47.243 13:10:06 -- host/digest.sh@93 -- # exp_module=software 00:47:47.243 13:10:06 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:47:47.243 13:10:06 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:47:47.243 13:10:06 -- host/digest.sh@97 -- # killprocess 96705 00:47:47.243 13:10:06 -- common/autotest_common.sh@926 -- # '[' -z 96705 ']' 00:47:47.243 13:10:06 -- common/autotest_common.sh@930 -- # kill -0 96705 00:47:47.243 13:10:06 -- common/autotest_common.sh@931 -- # uname 00:47:47.243 13:10:06 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:47:47.243 13:10:06 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 96705 00:47:47.243 killing process with pid 96705 00:47:47.243 Received shutdown signal, test time was about 2.000000 seconds 00:47:47.243 00:47:47.243 Latency(us) 00:47:47.243 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:47:47.244 =================================================================================================================== 00:47:47.244 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:47:47.244 13:10:06 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:47:47.244 13:10:06 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:47:47.244 13:10:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 96705' 00:47:47.244 13:10:06 -- common/autotest_common.sh@945 -- # kill 96705 00:47:47.244 13:10:06 -- common/autotest_common.sh@950 -- # wait 96705 00:47:47.501 13:10:06 -- host/digest.sh@125 -- # run_bperf randwrite 131072 16 00:47:47.501 13:10:06 -- host/digest.sh@77 -- # local rw bs qd 00:47:47.501 13:10:06 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:47:47.501 13:10:06 -- host/digest.sh@80 -- # rw=randwrite 00:47:47.501 13:10:06 -- host/digest.sh@80 -- # bs=131072 00:47:47.501 13:10:06 -- host/digest.sh@80 -- # qd=16 00:47:47.501 13:10:06 -- host/digest.sh@82 -- # bperfpid=96799 00:47:47.501 13:10:06 -- host/digest.sh@83 -- # waitforlisten 96799 /var/tmp/bperf.sock 00:47:47.501 13:10:06 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:47:47.501 13:10:06 -- common/autotest_common.sh@819 -- # '[' -z 96799 ']' 00:47:47.501 13:10:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:47:47.501 13:10:06 -- common/autotest_common.sh@824 -- 
# local max_retries=100 00:47:47.501 13:10:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:47:47.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:47:47.501 13:10:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:47:47.501 13:10:06 -- common/autotest_common.sh@10 -- # set +x 00:47:47.501 [2024-07-22 13:10:06.909710] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:47:47.501 [2024-07-22 13:10:06.910021] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96799 ] 00:47:47.501 I/O size of 131072 is greater than zero copy threshold (65536). 00:47:47.501 Zero copy mechanism will not be used. 00:47:47.759 [2024-07-22 13:10:07.047781] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:47.759 [2024-07-22 13:10:07.114295] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:47:48.691 13:10:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:47:48.691 13:10:07 -- common/autotest_common.sh@852 -- # return 0 00:47:48.691 13:10:07 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:47:48.691 13:10:07 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:47:48.691 13:10:07 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:47:48.949 13:10:08 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:47:48.949 13:10:08 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:47:49.206 nvme0n1 00:47:49.206 13:10:08 -- host/digest.sh@91 -- # bperf_py perform_tests 00:47:49.206 13:10:08 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:47:49.206 I/O size of 131072 is greater than zero copy threshold (65536). 00:47:49.206 Zero copy mechanism will not be used. 00:47:49.206 Running I/O for 2 seconds... 
00:47:51.747 00:47:51.747 Latency(us) 00:47:51.747 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:47:51.747 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:47:51.747 nvme0n1 : 2.00 8665.29 1083.16 0.00 0.00 1842.24 1504.35 3902.37 00:47:51.747 =================================================================================================================== 00:47:51.747 Total : 8665.29 1083.16 0.00 0.00 1842.24 1504.35 3902.37 00:47:51.747 0 00:47:51.747 13:10:10 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:47:51.747 13:10:10 -- host/digest.sh@92 -- # get_accel_stats 00:47:51.747 13:10:10 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:47:51.747 13:10:10 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:47:51.747 13:10:10 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:47:51.747 | select(.opcode=="crc32c") 00:47:51.747 | "\(.module_name) \(.executed)"' 00:47:51.747 13:10:10 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:47:51.747 13:10:10 -- host/digest.sh@93 -- # exp_module=software 00:47:51.747 13:10:10 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:47:51.747 13:10:10 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:47:51.747 13:10:10 -- host/digest.sh@97 -- # killprocess 96799 00:47:51.747 13:10:10 -- common/autotest_common.sh@926 -- # '[' -z 96799 ']' 00:47:51.747 13:10:10 -- common/autotest_common.sh@930 -- # kill -0 96799 00:47:51.747 13:10:10 -- common/autotest_common.sh@931 -- # uname 00:47:51.747 13:10:10 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:47:51.747 13:10:10 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 96799 00:47:51.747 13:10:10 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:47:51.747 13:10:10 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:47:51.747 killing process with pid 96799 00:47:51.747 13:10:10 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 96799' 00:47:51.747 13:10:10 -- common/autotest_common.sh@945 -- # kill 96799 00:47:51.747 Received shutdown signal, test time was about 2.000000 seconds 00:47:51.747 00:47:51.747 Latency(us) 00:47:51.747 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:47:51.747 =================================================================================================================== 00:47:51.747 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:47:51.747 13:10:10 -- common/autotest_common.sh@950 -- # wait 96799 00:47:51.747 13:10:11 -- host/digest.sh@126 -- # killprocess 96480 00:47:51.747 13:10:11 -- common/autotest_common.sh@926 -- # '[' -z 96480 ']' 00:47:51.747 13:10:11 -- common/autotest_common.sh@930 -- # kill -0 96480 00:47:51.747 13:10:11 -- common/autotest_common.sh@931 -- # uname 00:47:51.747 13:10:11 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:47:51.748 13:10:11 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 96480 00:47:51.748 13:10:11 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:47:51.748 13:10:11 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:47:51.748 killing process with pid 96480 00:47:51.748 13:10:11 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 96480' 00:47:51.748 13:10:11 -- common/autotest_common.sh@945 -- # kill 96480 00:47:51.748 13:10:11 -- common/autotest_common.sh@950 -- # wait 96480 00:47:52.006 00:47:52.006 real 0m18.378s 00:47:52.006 
user 0m34.804s 00:47:52.006 sys 0m4.698s 00:47:52.006 13:10:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:47:52.006 13:10:11 -- common/autotest_common.sh@10 -- # set +x 00:47:52.006 ************************************ 00:47:52.006 END TEST nvmf_digest_clean 00:47:52.006 ************************************ 00:47:52.006 13:10:11 -- host/digest.sh@136 -- # run_test nvmf_digest_error run_digest_error 00:47:52.006 13:10:11 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:47:52.006 13:10:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:47:52.006 13:10:11 -- common/autotest_common.sh@10 -- # set +x 00:47:52.006 ************************************ 00:47:52.006 START TEST nvmf_digest_error 00:47:52.006 ************************************ 00:47:52.006 13:10:11 -- common/autotest_common.sh@1104 -- # run_digest_error 00:47:52.006 13:10:11 -- host/digest.sh@101 -- # nvmfappstart --wait-for-rpc 00:47:52.006 13:10:11 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:47:52.006 13:10:11 -- common/autotest_common.sh@712 -- # xtrace_disable 00:47:52.006 13:10:11 -- common/autotest_common.sh@10 -- # set +x 00:47:52.006 13:10:11 -- nvmf/common.sh@469 -- # nvmfpid=96909 00:47:52.006 13:10:11 -- nvmf/common.sh@470 -- # waitforlisten 96909 00:47:52.006 13:10:11 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:47:52.006 13:10:11 -- common/autotest_common.sh@819 -- # '[' -z 96909 ']' 00:47:52.006 13:10:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:47:52.006 13:10:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:47:52.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:47:52.006 13:10:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:47:52.006 13:10:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:47:52.006 13:10:11 -- common/autotest_common.sh@10 -- # set +x 00:47:52.006 [2024-07-22 13:10:11.357268] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:47:52.006 [2024-07-22 13:10:11.357356] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:47:52.264 [2024-07-22 13:10:11.495683] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:52.264 [2024-07-22 13:10:11.556666] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:47:52.264 [2024-07-22 13:10:11.556827] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:47:52.264 [2024-07-22 13:10:11.556840] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:47:52.265 [2024-07-22 13:10:11.556864] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
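The nvmf_digest_error test starting here launches its own target the same way the clean test did, with one difference visible just below: the target runs with --wait-for-rpc so the crc32c opcode can be handed to the error-injecting accel module before the framework finishes initializing. Condensed from this trace (same binary path and network namespace as the log):

    # start the target suspended inside the test's network namespace
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --wait-for-rpc &
    nvmfpid=$!
    # once /var/tmp/spdk.sock is up, reroute crc32c to the "error" module
    # (the accel_assign_opc call a few lines further down in the trace)
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_assign_opc -o crc32c -m error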
00:47:52.265 [2024-07-22 13:10:11.556887] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:47:53.199 13:10:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:47:53.199 13:10:12 -- common/autotest_common.sh@852 -- # return 0 00:47:53.199 13:10:12 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:47:53.199 13:10:12 -- common/autotest_common.sh@718 -- # xtrace_disable 00:47:53.199 13:10:12 -- common/autotest_common.sh@10 -- # set +x 00:47:53.199 13:10:12 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:47:53.199 13:10:12 -- host/digest.sh@103 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:47:53.199 13:10:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:47:53.199 13:10:12 -- common/autotest_common.sh@10 -- # set +x 00:47:53.199 [2024-07-22 13:10:12.377428] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:47:53.199 13:10:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:47:53.199 13:10:12 -- host/digest.sh@104 -- # common_target_config 00:47:53.199 13:10:12 -- host/digest.sh@43 -- # rpc_cmd 00:47:53.199 13:10:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:47:53.199 13:10:12 -- common/autotest_common.sh@10 -- # set +x 00:47:53.199 null0 00:47:53.199 [2024-07-22 13:10:12.479709] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:47:53.199 [2024-07-22 13:10:12.503822] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:47:53.199 13:10:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:47:53.199 13:10:12 -- host/digest.sh@107 -- # run_bperf_err randread 4096 128 00:47:53.199 13:10:12 -- host/digest.sh@54 -- # local rw bs qd 00:47:53.199 13:10:12 -- host/digest.sh@56 -- # rw=randread 00:47:53.199 13:10:12 -- host/digest.sh@56 -- # bs=4096 00:47:53.199 13:10:12 -- host/digest.sh@56 -- # qd=128 00:47:53.199 13:10:12 -- host/digest.sh@58 -- # bperfpid=96953 00:47:53.199 13:10:12 -- host/digest.sh@60 -- # waitforlisten 96953 /var/tmp/bperf.sock 00:47:53.199 13:10:12 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:47:53.199 13:10:12 -- common/autotest_common.sh@819 -- # '[' -z 96953 ']' 00:47:53.199 13:10:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:47:53.199 13:10:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:47:53.199 13:10:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:47:53.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:47:53.199 13:10:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:47:53.199 13:10:12 -- common/autotest_common.sh@10 -- # set +x 00:47:53.199 [2024-07-22 13:10:12.561294] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:47:53.199 [2024-07-22 13:10:12.561388] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96953 ] 00:47:53.456 [2024-07-22 13:10:12.700467] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:53.456 [2024-07-22 13:10:12.775152] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:47:54.389 13:10:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:47:54.389 13:10:13 -- common/autotest_common.sh@852 -- # return 0 00:47:54.389 13:10:13 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:47:54.389 13:10:13 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:47:54.389 13:10:13 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:47:54.389 13:10:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:47:54.389 13:10:13 -- common/autotest_common.sh@10 -- # set +x 00:47:54.390 13:10:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:47:54.390 13:10:13 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:47:54.390 13:10:13 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:47:54.956 nvme0n1 00:47:54.956 13:10:14 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:47:54.956 13:10:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:47:54.956 13:10:14 -- common/autotest_common.sh@10 -- # set +x 00:47:54.956 13:10:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:47:54.956 13:10:14 -- host/digest.sh@69 -- # bperf_py perform_tests 00:47:54.956 13:10:14 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:47:54.956 Running I/O for 2 seconds... 
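The burst of "data digest error" / "COMMAND TRANSIENT TRANSPORT ERROR" messages that follows is the intended result of the setup just traced: digest-error injection is disabled while the controller attaches, then re-enabled in corrupt mode so the host-side TCP transport sees bad data digests and the bdev layer retries (--bdev-retry-count -1). Pulled together from the trace, with rpc_cmd going to the target's default /var/tmp/spdk.sock and bperf_rpc to /var/tmp/bperf.sock:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py          # shorthand for this sketch only
    $RPC -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    $RPC accel_error_inject_error -o crc32c -t disable       # keep the attach itself clean
    $RPC -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 \
        -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    $RPC accel_error_inject_error -o crc32c -t corrupt -i 256   # corrupt crc32c, as in the trace
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests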
00:47:54.956 [2024-07-22 13:10:14.234217] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:54.956 [2024-07-22 13:10:14.234286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:11467 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:54.956 [2024-07-22 13:10:14.234317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:54.956 [2024-07-22 13:10:14.247906] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:54.956 [2024-07-22 13:10:14.247959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22157 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:54.956 [2024-07-22 13:10:14.247986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:54.956 [2024-07-22 13:10:14.260992] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:54.956 [2024-07-22 13:10:14.261045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:14563 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:54.956 [2024-07-22 13:10:14.261073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:54.956 [2024-07-22 13:10:14.275123] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:54.956 [2024-07-22 13:10:14.275184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:23056 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:54.956 [2024-07-22 13:10:14.275212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:54.956 [2024-07-22 13:10:14.288275] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:54.956 [2024-07-22 13:10:14.288328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:19886 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:54.956 [2024-07-22 13:10:14.288355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:54.956 [2024-07-22 13:10:14.299926] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:54.956 [2024-07-22 13:10:14.299978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:20848 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:54.956 [2024-07-22 13:10:14.300006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:54.956 [2024-07-22 13:10:14.310265] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:54.956 [2024-07-22 13:10:14.310317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18987 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:54.956 [2024-07-22 13:10:14.310345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:54.956 [2024-07-22 13:10:14.323012] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:54.956 [2024-07-22 13:10:14.323064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:2697 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:54.956 [2024-07-22 13:10:14.323091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:54.956 [2024-07-22 13:10:14.335988] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:54.956 [2024-07-22 13:10:14.336041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:23501 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:54.956 [2024-07-22 13:10:14.336068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:54.956 [2024-07-22 13:10:14.349555] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:54.956 [2024-07-22 13:10:14.349608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:1 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:54.956 [2024-07-22 13:10:14.349635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:54.956 [2024-07-22 13:10:14.362383] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:54.956 [2024-07-22 13:10:14.362436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:17388 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:54.956 [2024-07-22 13:10:14.362464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:54.956 [2024-07-22 13:10:14.375634] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:54.956 [2024-07-22 13:10:14.375699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:5690 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:54.956 [2024-07-22 13:10:14.375728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:55.215 [2024-07-22 13:10:14.389633] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:55.215 [2024-07-22 13:10:14.389688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:47 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:55.215 [2024-07-22 13:10:14.389716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:55.215 [2024-07-22 13:10:14.401991] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:55.215 [2024-07-22 13:10:14.402044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2880 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:55.215 [2024-07-22 13:10:14.402072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:55.215 [2024-07-22 13:10:14.411760] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:55.215 [2024-07-22 13:10:14.411813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:12802 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:55.215 [2024-07-22 13:10:14.411841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:55.215 [2024-07-22 13:10:14.424418] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:55.215 [2024-07-22 13:10:14.424471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:6587 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:55.215 [2024-07-22 13:10:14.424499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:55.215 [2024-07-22 13:10:14.434066] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:55.215 [2024-07-22 13:10:14.434119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:3332 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:55.215 [2024-07-22 13:10:14.434146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:55.215 [2024-07-22 13:10:14.446593] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:55.215 [2024-07-22 13:10:14.446669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7900 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:55.215 [2024-07-22 13:10:14.446683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:55.215 [2024-07-22 13:10:14.459754] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:55.215 [2024-07-22 13:10:14.459807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:23672 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:55.215 [2024-07-22 13:10:14.459834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:55.215 [2024-07-22 13:10:14.472701] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:55.215 [2024-07-22 13:10:14.472758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8018 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:55.215 [2024-07-22 13:10:14.472787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:55.215 [2024-07-22 13:10:14.485942] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:55.215 [2024-07-22 13:10:14.485997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:17330 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:55.215 [2024-07-22 13:10:14.486025] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:55.215 [2024-07-22 13:10:14.498664] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:55.215 [2024-07-22 13:10:14.498719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2868 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:55.215 [2024-07-22 13:10:14.498732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:55.215 [2024-07-22 13:10:14.511319] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:55.215 [2024-07-22 13:10:14.511370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:5515 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:55.215 [2024-07-22 13:10:14.511397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:55.215 [2024-07-22 13:10:14.524272] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:55.215 [2024-07-22 13:10:14.524328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:22775 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:55.215 [2024-07-22 13:10:14.524356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:55.215 [2024-07-22 13:10:14.536921] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:55.215 [2024-07-22 13:10:14.536973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:15051 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:55.215 [2024-07-22 13:10:14.537000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:55.215 [2024-07-22 13:10:14.549701] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:55.215 [2024-07-22 13:10:14.549753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:5977 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:55.215 [2024-07-22 13:10:14.549780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:55.215 [2024-07-22 13:10:14.562456] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:55.215 [2024-07-22 13:10:14.562508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:15268 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:55.215 [2024-07-22 13:10:14.562535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:55.215 [2024-07-22 13:10:14.575629] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:55.215 [2024-07-22 13:10:14.575682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:22098 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:55.215 
[2024-07-22 13:10:14.575709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:55.215 [2024-07-22 13:10:14.587073] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:55.215 [2024-07-22 13:10:14.587125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:9237 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:55.215 [2024-07-22 13:10:14.587163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:55.215 [2024-07-22 13:10:14.600538] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:55.215 [2024-07-22 13:10:14.600606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:777 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:55.215 [2024-07-22 13:10:14.600634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:55.215 [2024-07-22 13:10:14.614462] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:55.215 [2024-07-22 13:10:14.614518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:11104 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:55.215 [2024-07-22 13:10:14.614546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:55.215 [2024-07-22 13:10:14.626942] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:55.215 [2024-07-22 13:10:14.626995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:21179 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:55.215 [2024-07-22 13:10:14.627022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:55.474 [2024-07-22 13:10:14.640536] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:55.474 [2024-07-22 13:10:14.640579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:10880 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:55.474 [2024-07-22 13:10:14.640607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:55.474 [2024-07-22 13:10:14.653983] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:55.474 [2024-07-22 13:10:14.654037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:3251 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:55.474 [2024-07-22 13:10:14.654065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:55.474 [2024-07-22 13:10:14.665918] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:55.474 [2024-07-22 13:10:14.665972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:2444 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:55.474 [2024-07-22 13:10:14.665999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:55.474 [2024-07-22 13:10:14.678304] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:55.474 [2024-07-22 13:10:14.678356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:13111 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:55.474 [2024-07-22 13:10:14.678384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:55.474 [2024-07-22 13:10:14.690639] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:55.474 [2024-07-22 13:10:14.690692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5170 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:55.474 [2024-07-22 13:10:14.690720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:55.474 [2024-07-22 13:10:14.702755] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:55.474 [2024-07-22 13:10:14.702805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:7033 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:55.474 [2024-07-22 13:10:14.702833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:55.474 [2024-07-22 13:10:14.715813] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:55.474 [2024-07-22 13:10:14.715865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:9504 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:55.474 [2024-07-22 13:10:14.715893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:55.475 [2024-07-22 13:10:14.728974] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:55.475 [2024-07-22 13:10:14.729029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24705 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:55.475 [2024-07-22 13:10:14.729058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:55.475 [2024-07-22 13:10:14.741951] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:55.475 [2024-07-22 13:10:14.742006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:3196 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:55.475 [2024-07-22 13:10:14.742035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:55.475 [2024-07-22 13:10:14.752770] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:55.475 [2024-07-22 13:10:14.752823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:17 nsid:1 lba:9512 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:55.475 [2024-07-22 13:10:14.752851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:55.475 [2024-07-22 13:10:14.763823] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:55.475 [2024-07-22 13:10:14.763875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23883 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:55.475 [2024-07-22 13:10:14.763903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:55.475 [2024-07-22 13:10:14.773364] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:55.475 [2024-07-22 13:10:14.773416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:10495 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:55.475 [2024-07-22 13:10:14.773443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:55.475 [2024-07-22 13:10:14.784168] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:55.475 [2024-07-22 13:10:14.784220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:25060 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:55.475 [2024-07-22 13:10:14.784247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:55.475 [2024-07-22 13:10:14.795270] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:55.475 [2024-07-22 13:10:14.795323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:3137 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:55.475 [2024-07-22 13:10:14.795351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:55.475 [2024-07-22 13:10:14.804288] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:55.475 [2024-07-22 13:10:14.804339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:4966 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:55.475 [2024-07-22 13:10:14.804366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:55.475 [2024-07-22 13:10:14.814520] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:55.475 [2024-07-22 13:10:14.814573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:25288 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:55.475 [2024-07-22 13:10:14.814609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:55.475 [2024-07-22 13:10:14.825583] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:55.475 [2024-07-22 13:10:14.825637] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:16414 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:55.475 [2024-07-22 13:10:14.825665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:55.475 [2024-07-22 13:10:14.838401] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:55.475 [2024-07-22 13:10:14.838454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:2548 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:55.475 [2024-07-22 13:10:14.838481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:55.475 [2024-07-22 13:10:14.852013] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:55.475 [2024-07-22 13:10:14.852066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:7422 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:55.475 [2024-07-22 13:10:14.852093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:55.475 [2024-07-22 13:10:14.865022] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:55.475 [2024-07-22 13:10:14.865091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:6563 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:55.475 [2024-07-22 13:10:14.865118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:55.475 [2024-07-22 13:10:14.877697] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:55.475 [2024-07-22 13:10:14.877750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23051 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:55.475 [2024-07-22 13:10:14.877778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:55.475 [2024-07-22 13:10:14.890396] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:55.475 [2024-07-22 13:10:14.890450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:24606 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:55.475 [2024-07-22 13:10:14.890477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:55.733 [2024-07-22 13:10:14.903500] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:55.733 [2024-07-22 13:10:14.903557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:18894 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:55.733 [2024-07-22 13:10:14.903585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:55.733 [2024-07-22 13:10:14.913862] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:55.733 
[2024-07-22 13:10:14.913915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:2313 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:55.733 [2024-07-22 13:10:14.913943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:55.733 [2024-07-22 13:10:14.925409] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:55.733 [2024-07-22 13:10:14.925462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20416 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:55.733 [2024-07-22 13:10:14.925489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:55.733 [2024-07-22 13:10:14.938687] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:55.733 [2024-07-22 13:10:14.938738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1873 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:55.733 [2024-07-22 13:10:14.938765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:55.734 [2024-07-22 13:10:14.952199] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:55.734 [2024-07-22 13:10:14.952251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12736 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:55.734 [2024-07-22 13:10:14.952278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:55.734 [2024-07-22 13:10:14.961099] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:55.734 [2024-07-22 13:10:14.961175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:11809 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:55.734 [2024-07-22 13:10:14.961189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:55.734 [2024-07-22 13:10:14.974483] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:55.734 [2024-07-22 13:10:14.974536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:14464 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:55.734 [2024-07-22 13:10:14.974574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:55.734 [2024-07-22 13:10:14.988322] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:55.734 [2024-07-22 13:10:14.988378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:3275 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:55.734 [2024-07-22 13:10:14.988406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:55.734 [2024-07-22 13:10:15.001354] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x673760) 00:47:55.734 [2024-07-22 13:10:15.001409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:7922 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:55.734 [2024-07-22 13:10:15.001437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:55.734 [2024-07-22 13:10:15.013642] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:55.734 [2024-07-22 13:10:15.013697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:20929 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:55.734 [2024-07-22 13:10:15.013725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:55.734 [2024-07-22 13:10:15.026718] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:55.734 [2024-07-22 13:10:15.026773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:18623 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:55.734 [2024-07-22 13:10:15.026787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:55.734 [2024-07-22 13:10:15.038828] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:55.734 [2024-07-22 13:10:15.038884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:16508 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:55.734 [2024-07-22 13:10:15.038925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:55.734 [2024-07-22 13:10:15.051164] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:55.734 [2024-07-22 13:10:15.051226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:6063 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:55.734 [2024-07-22 13:10:15.051255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:55.734 [2024-07-22 13:10:15.064826] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:55.734 [2024-07-22 13:10:15.064878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:10867 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:55.734 [2024-07-22 13:10:15.064905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:55.734 [2024-07-22 13:10:15.073598] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:55.734 [2024-07-22 13:10:15.073651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:7199 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:55.734 [2024-07-22 13:10:15.073678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:55.734 [2024-07-22 13:10:15.086666] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:55.734 [2024-07-22 13:10:15.086718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:15108 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:55.734 [2024-07-22 13:10:15.086745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:55.734 [2024-07-22 13:10:15.099480] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:55.734 [2024-07-22 13:10:15.099532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:7281 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:55.734 [2024-07-22 13:10:15.099559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:55.734 [2024-07-22 13:10:15.111893] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:55.734 [2024-07-22 13:10:15.111944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:15981 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:55.734 [2024-07-22 13:10:15.111972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:55.734 [2024-07-22 13:10:15.124569] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:55.734 [2024-07-22 13:10:15.124620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:16247 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:55.734 [2024-07-22 13:10:15.124648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:55.734 [2024-07-22 13:10:15.137315] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:55.734 [2024-07-22 13:10:15.137367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:16306 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:55.734 [2024-07-22 13:10:15.137394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:55.734 [2024-07-22 13:10:15.150873] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:55.734 [2024-07-22 13:10:15.150925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:25061 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:55.734 [2024-07-22 13:10:15.150967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:55.992 [2024-07-22 13:10:15.164951] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:55.992 [2024-07-22 13:10:15.165007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:10647 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:55.992 [2024-07-22 13:10:15.165035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:47:55.992 [2024-07-22 13:10:15.178115] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:55.992 [2024-07-22 13:10:15.178179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1734 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:55.992 [2024-07-22 13:10:15.178209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:55.992 [2024-07-22 13:10:15.192863] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:55.992 [2024-07-22 13:10:15.192917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:14982 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:55.992 [2024-07-22 13:10:15.192945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:55.992 [2024-07-22 13:10:15.205308] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:55.992 [2024-07-22 13:10:15.205364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:12006 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:55.992 [2024-07-22 13:10:15.205393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:55.992 [2024-07-22 13:10:15.214879] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:55.992 [2024-07-22 13:10:15.214917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:14215 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:55.992 [2024-07-22 13:10:15.214945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:55.992 [2024-07-22 13:10:15.225647] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:55.992 [2024-07-22 13:10:15.225701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:25410 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:55.992 [2024-07-22 13:10:15.225728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:55.992 [2024-07-22 13:10:15.236391] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:55.992 [2024-07-22 13:10:15.236444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:25134 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:55.992 [2024-07-22 13:10:15.236472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:55.992 [2024-07-22 13:10:15.247471] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:55.992 [2024-07-22 13:10:15.247513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:11902 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:55.992 [2024-07-22 13:10:15.247542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:55.992 [2024-07-22 13:10:15.258768] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:55.992 [2024-07-22 13:10:15.258823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:23943 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:55.992 [2024-07-22 13:10:15.258852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:55.992 [2024-07-22 13:10:15.273203] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:55.992 [2024-07-22 13:10:15.273257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:19791 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:55.992 [2024-07-22 13:10:15.273285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:55.992 [2024-07-22 13:10:15.287078] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:55.992 [2024-07-22 13:10:15.287132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:15883 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:55.992 [2024-07-22 13:10:15.287174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:55.992 [2024-07-22 13:10:15.297492] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:55.992 [2024-07-22 13:10:15.297546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25067 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:55.992 [2024-07-22 13:10:15.297574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:55.992 [2024-07-22 13:10:15.307273] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:55.992 [2024-07-22 13:10:15.307327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:5261 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:55.992 [2024-07-22 13:10:15.307356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:55.992 [2024-07-22 13:10:15.317719] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:55.992 [2024-07-22 13:10:15.317771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:1663 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:55.992 [2024-07-22 13:10:15.317799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:55.992 [2024-07-22 13:10:15.329525] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:55.992 [2024-07-22 13:10:15.329577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:3117 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:55.992 [2024-07-22 13:10:15.329605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:55.992 [2024-07-22 13:10:15.342468] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:55.992 [2024-07-22 13:10:15.342532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:664 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:55.992 [2024-07-22 13:10:15.342560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:55.992 [2024-07-22 13:10:15.356520] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:55.992 [2024-07-22 13:10:15.356572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:19266 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:55.992 [2024-07-22 13:10:15.356600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:55.992 [2024-07-22 13:10:15.368529] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:55.992 [2024-07-22 13:10:15.368581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5015 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:55.992 [2024-07-22 13:10:15.368609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:55.992 [2024-07-22 13:10:15.381004] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:55.992 [2024-07-22 13:10:15.381056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:11089 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:55.992 [2024-07-22 13:10:15.381084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:55.992 [2024-07-22 13:10:15.392638] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:55.992 [2024-07-22 13:10:15.392691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:31 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:55.992 [2024-07-22 13:10:15.392718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:55.992 [2024-07-22 13:10:15.404959] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:55.992 [2024-07-22 13:10:15.405012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:11258 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:55.992 [2024-07-22 13:10:15.405039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:56.251 [2024-07-22 13:10:15.414710] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:56.251 [2024-07-22 13:10:15.414751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:23177 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:56.251 [2024-07-22 13:10:15.414781] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:56.251 [2024-07-22 13:10:15.427471] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:56.251 [2024-07-22 13:10:15.427527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23785 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:56.251 [2024-07-22 13:10:15.427555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:56.251 [2024-07-22 13:10:15.440685] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:56.251 [2024-07-22 13:10:15.440738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:5709 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:56.251 [2024-07-22 13:10:15.440765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:56.251 [2024-07-22 13:10:15.453568] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:56.251 [2024-07-22 13:10:15.453620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:15041 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:56.251 [2024-07-22 13:10:15.453648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:56.251 [2024-07-22 13:10:15.466499] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:56.251 [2024-07-22 13:10:15.466552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:121 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:56.251 [2024-07-22 13:10:15.466564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:56.251 [2024-07-22 13:10:15.480014] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:56.251 [2024-07-22 13:10:15.480067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:21267 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:56.251 [2024-07-22 13:10:15.480094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:56.251 [2024-07-22 13:10:15.492857] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:56.251 [2024-07-22 13:10:15.492909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:17180 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:56.251 [2024-07-22 13:10:15.492936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:56.251 [2024-07-22 13:10:15.505871] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:56.251 [2024-07-22 13:10:15.505928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:4939 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:56.251 
[2024-07-22 13:10:15.505956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:56.251 [2024-07-22 13:10:15.518716] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:56.251 [2024-07-22 13:10:15.518772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:14267 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:56.251 [2024-07-22 13:10:15.518800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:56.251 [2024-07-22 13:10:15.529009] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:56.251 [2024-07-22 13:10:15.529062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:17055 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:56.251 [2024-07-22 13:10:15.529090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:56.251 [2024-07-22 13:10:15.541742] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:56.251 [2024-07-22 13:10:15.541796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:19642 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:56.251 [2024-07-22 13:10:15.541824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:56.251 [2024-07-22 13:10:15.555163] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:56.251 [2024-07-22 13:10:15.555218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:13058 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:56.251 [2024-07-22 13:10:15.555248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:56.251 [2024-07-22 13:10:15.568230] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:56.251 [2024-07-22 13:10:15.568284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:10706 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:56.251 [2024-07-22 13:10:15.568312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:56.251 [2024-07-22 13:10:15.581225] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:56.251 [2024-07-22 13:10:15.581278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:13581 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:56.251 [2024-07-22 13:10:15.581305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:56.251 [2024-07-22 13:10:15.595076] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:56.251 [2024-07-22 13:10:15.595130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:23424 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:56.251 [2024-07-22 13:10:15.595169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:56.251 [2024-07-22 13:10:15.608093] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:56.251 [2024-07-22 13:10:15.608169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:6907 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:56.251 [2024-07-22 13:10:15.608183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:56.251 [2024-07-22 13:10:15.619000] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:56.251 [2024-07-22 13:10:15.619052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:9586 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:56.251 [2024-07-22 13:10:15.619079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:56.251 [2024-07-22 13:10:15.632151] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:56.251 [2024-07-22 13:10:15.632202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:20108 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:56.251 [2024-07-22 13:10:15.632230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:56.251 [2024-07-22 13:10:15.645499] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:56.251 [2024-07-22 13:10:15.645551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:23546 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:56.251 [2024-07-22 13:10:15.645578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:56.251 [2024-07-22 13:10:15.654044] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:56.251 [2024-07-22 13:10:15.654095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:1535 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:56.251 [2024-07-22 13:10:15.654123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:56.251 [2024-07-22 13:10:15.667237] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:56.251 [2024-07-22 13:10:15.667288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:17534 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:56.251 [2024-07-22 13:10:15.667317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:56.510 [2024-07-22 13:10:15.681481] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:56.510 [2024-07-22 13:10:15.681536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:39 nsid:1 lba:22523 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:56.510 [2024-07-22 13:10:15.681563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:56.510 [2024-07-22 13:10:15.693681] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:56.510 [2024-07-22 13:10:15.693734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:20709 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:56.510 [2024-07-22 13:10:15.693762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:56.510 [2024-07-22 13:10:15.707063] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:56.510 [2024-07-22 13:10:15.707115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21125 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:56.510 [2024-07-22 13:10:15.707142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:56.510 [2024-07-22 13:10:15.719606] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:56.510 [2024-07-22 13:10:15.719660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:522 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:56.510 [2024-07-22 13:10:15.719687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:56.510 [2024-07-22 13:10:15.729731] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:56.510 [2024-07-22 13:10:15.729785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4331 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:56.510 [2024-07-22 13:10:15.729813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:56.510 [2024-07-22 13:10:15.740180] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:56.510 [2024-07-22 13:10:15.740233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24874 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:56.510 [2024-07-22 13:10:15.740260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:56.510 [2024-07-22 13:10:15.750939] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:56.510 [2024-07-22 13:10:15.750990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:4278 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:56.510 [2024-07-22 13:10:15.751017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:56.510 [2024-07-22 13:10:15.761110] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:56.510 [2024-07-22 13:10:15.761173] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:89 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:56.510 [2024-07-22 13:10:15.761202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:56.510 [2024-07-22 13:10:15.770173] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:56.510 [2024-07-22 13:10:15.770224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12698 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:56.510 [2024-07-22 13:10:15.770251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:56.510 [2024-07-22 13:10:15.782283] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:56.510 [2024-07-22 13:10:15.782334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:12775 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:56.510 [2024-07-22 13:10:15.782362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:56.510 [2024-07-22 13:10:15.795022] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:56.510 [2024-07-22 13:10:15.795073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:9319 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:56.510 [2024-07-22 13:10:15.795101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:56.510 [2024-07-22 13:10:15.807781] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:56.510 [2024-07-22 13:10:15.807833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:24286 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:56.510 [2024-07-22 13:10:15.807860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:56.510 [2024-07-22 13:10:15.821664] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:56.510 [2024-07-22 13:10:15.821721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:10488 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:56.510 [2024-07-22 13:10:15.821749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:56.510 [2024-07-22 13:10:15.829578] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:56.510 [2024-07-22 13:10:15.829631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:8294 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:56.510 [2024-07-22 13:10:15.829659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:56.510 [2024-07-22 13:10:15.842362] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:56.510 
[2024-07-22 13:10:15.842415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:18209 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:56.510 [2024-07-22 13:10:15.842442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:56.510 [2024-07-22 13:10:15.854901] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:56.510 [2024-07-22 13:10:15.854970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6205 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:56.510 [2024-07-22 13:10:15.854998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:56.510 [2024-07-22 13:10:15.867867] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:56.510 [2024-07-22 13:10:15.867920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:9893 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:56.510 [2024-07-22 13:10:15.867950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:56.510 [2024-07-22 13:10:15.879795] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:56.510 [2024-07-22 13:10:15.879847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:12323 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:56.511 [2024-07-22 13:10:15.879875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:56.511 [2024-07-22 13:10:15.893414] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:56.511 [2024-07-22 13:10:15.893466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:8732 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:56.511 [2024-07-22 13:10:15.893494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:56.511 [2024-07-22 13:10:15.906246] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:56.511 [2024-07-22 13:10:15.906297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:7243 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:56.511 [2024-07-22 13:10:15.906325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:56.511 [2024-07-22 13:10:15.918735] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:56.511 [2024-07-22 13:10:15.918787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24815 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:56.511 [2024-07-22 13:10:15.918815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:56.769 [2024-07-22 13:10:15.931918] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x673760) 00:47:56.769 [2024-07-22 13:10:15.931963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:2250 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:56.769 [2024-07-22 13:10:15.931992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:56.769 [2024-07-22 13:10:15.945411] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:56.769 [2024-07-22 13:10:15.945467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:14399 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:56.769 [2024-07-22 13:10:15.945495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:56.769 [2024-07-22 13:10:15.958203] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:56.769 [2024-07-22 13:10:15.958266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:6665 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:56.769 [2024-07-22 13:10:15.958294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:56.769 [2024-07-22 13:10:15.971067] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:56.769 [2024-07-22 13:10:15.971120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24279 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:56.769 [2024-07-22 13:10:15.971148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:56.769 [2024-07-22 13:10:15.984560] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:56.769 [2024-07-22 13:10:15.984613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:17461 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:56.769 [2024-07-22 13:10:15.984641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:56.769 [2024-07-22 13:10:15.997025] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:56.769 [2024-07-22 13:10:15.997079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:18464 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:56.769 [2024-07-22 13:10:15.997107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:56.769 [2024-07-22 13:10:16.009559] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:56.769 [2024-07-22 13:10:16.009612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:5549 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:56.769 [2024-07-22 13:10:16.009639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:56.769 [2024-07-22 13:10:16.022086] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:56.770 [2024-07-22 13:10:16.022164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:3597 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:56.770 [2024-07-22 13:10:16.022178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:56.770 [2024-07-22 13:10:16.035846] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:56.770 [2024-07-22 13:10:16.035900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:1137 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:56.770 [2024-07-22 13:10:16.035928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:56.770 [2024-07-22 13:10:16.050592] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:56.770 [2024-07-22 13:10:16.050697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:6665 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:56.770 [2024-07-22 13:10:16.050725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:56.770 [2024-07-22 13:10:16.064070] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:56.770 [2024-07-22 13:10:16.064121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:3736 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:56.770 [2024-07-22 13:10:16.064175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:56.770 [2024-07-22 13:10:16.077817] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:56.770 [2024-07-22 13:10:16.077872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:239 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:56.770 [2024-07-22 13:10:16.077901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:56.770 [2024-07-22 13:10:16.090444] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:56.770 [2024-07-22 13:10:16.090497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11705 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:56.770 [2024-07-22 13:10:16.090524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:56.770 [2024-07-22 13:10:16.103110] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:56.770 [2024-07-22 13:10:16.103178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:9712 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:56.770 [2024-07-22 13:10:16.103207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:47:56.770 [2024-07-22 13:10:16.115832] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:56.770 [2024-07-22 13:10:16.115884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:21855 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:56.770 [2024-07-22 13:10:16.115911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:56.770 [2024-07-22 13:10:16.128578] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:56.770 [2024-07-22 13:10:16.128629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:6284 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:56.770 [2024-07-22 13:10:16.128657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:56.770 [2024-07-22 13:10:16.142165] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:56.770 [2024-07-22 13:10:16.142218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:6264 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:56.770 [2024-07-22 13:10:16.142246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:56.770 [2024-07-22 13:10:16.154918] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:56.770 [2024-07-22 13:10:16.154987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:1405 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:56.770 [2024-07-22 13:10:16.155014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:56.770 [2024-07-22 13:10:16.163590] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:56.770 [2024-07-22 13:10:16.163642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:5573 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:56.770 [2024-07-22 13:10:16.163669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:56.770 [2024-07-22 13:10:16.176682] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:56.770 [2024-07-22 13:10:16.176746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:6822 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:56.770 [2024-07-22 13:10:16.176774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:56.770 [2024-07-22 13:10:16.189798] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:56.770 [2024-07-22 13:10:16.189854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:11082 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:56.770 [2024-07-22 13:10:16.189882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:57.028 [2024-07-22 13:10:16.203149] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:57.028 [2024-07-22 13:10:16.203213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:18951 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:57.028 [2024-07-22 13:10:16.203242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:57.028 [2024-07-22 13:10:16.214991] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x673760) 00:47:57.028 [2024-07-22 13:10:16.215044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:7358 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:57.028 [2024-07-22 13:10:16.215072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:57.028 00:47:57.028 Latency(us) 00:47:57.028 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:47:57.028 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:47:57.028 nvme0n1 : 2.00 20504.68 80.10 0.00 0.00 6236.49 2383.13 19184.17 00:47:57.028 =================================================================================================================== 00:47:57.028 Total : 20504.68 80.10 0.00 0.00 6236.49 2383.13 19184.17 00:47:57.028 0 00:47:57.028 13:10:16 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:47:57.028 13:10:16 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:47:57.028 13:10:16 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:47:57.028 | .driver_specific 00:47:57.028 | .nvme_error 00:47:57.028 | .status_code 00:47:57.028 | .command_transient_transport_error' 00:47:57.028 13:10:16 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:47:57.286 13:10:16 -- host/digest.sh@71 -- # (( 161 > 0 )) 00:47:57.286 13:10:16 -- host/digest.sh@73 -- # killprocess 96953 00:47:57.286 13:10:16 -- common/autotest_common.sh@926 -- # '[' -z 96953 ']' 00:47:57.286 13:10:16 -- common/autotest_common.sh@930 -- # kill -0 96953 00:47:57.286 13:10:16 -- common/autotest_common.sh@931 -- # uname 00:47:57.286 13:10:16 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:47:57.286 13:10:16 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 96953 00:47:57.286 13:10:16 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:47:57.286 13:10:16 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:47:57.286 killing process with pid 96953 00:47:57.286 13:10:16 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 96953' 00:47:57.286 13:10:16 -- common/autotest_common.sh@945 -- # kill 96953 00:47:57.286 Received shutdown signal, test time was about 2.000000 seconds 00:47:57.286 00:47:57.286 Latency(us) 00:47:57.287 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:47:57.287 =================================================================================================================== 00:47:57.287 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:47:57.287 13:10:16 -- common/autotest_common.sh@950 -- # wait 96953 00:47:57.545 13:10:16 -- host/digest.sh@108 -- # run_bperf_err randread 131072 16 00:47:57.545 13:10:16 -- 
host/digest.sh@54 -- # local rw bs qd 00:47:57.545 13:10:16 -- host/digest.sh@56 -- # rw=randread 00:47:57.545 13:10:16 -- host/digest.sh@56 -- # bs=131072 00:47:57.545 13:10:16 -- host/digest.sh@56 -- # qd=16 00:47:57.545 13:10:16 -- host/digest.sh@58 -- # bperfpid=97043 00:47:57.545 13:10:16 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:47:57.545 13:10:16 -- host/digest.sh@60 -- # waitforlisten 97043 /var/tmp/bperf.sock 00:47:57.545 13:10:16 -- common/autotest_common.sh@819 -- # '[' -z 97043 ']' 00:47:57.545 13:10:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:47:57.545 13:10:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:47:57.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:47:57.545 13:10:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:47:57.545 13:10:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:47:57.545 13:10:16 -- common/autotest_common.sh@10 -- # set +x 00:47:57.545 [2024-07-22 13:10:16.769017] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:47:57.545 [2024-07-22 13:10:16.769127] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97043 ] 00:47:57.545 I/O size of 131072 is greater than zero copy threshold (65536). 00:47:57.545 Zero copy mechanism will not be used. 00:47:57.545 [2024-07-22 13:10:16.907017] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:57.802 [2024-07-22 13:10:16.969761] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:47:58.368 13:10:17 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:47:58.368 13:10:17 -- common/autotest_common.sh@852 -- # return 0 00:47:58.368 13:10:17 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:47:58.368 13:10:17 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:47:58.635 13:10:17 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:47:58.635 13:10:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:47:58.635 13:10:17 -- common/autotest_common.sh@10 -- # set +x 00:47:58.635 13:10:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:47:58.635 13:10:17 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:47:58.635 13:10:17 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:47:58.908 nvme0n1 00:47:58.908 13:10:18 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:47:58.908 13:10:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:47:58.908 13:10:18 -- common/autotest_common.sh@10 -- # set +x 00:47:58.908 13:10:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:47:58.908 13:10:18 -- host/digest.sh@69 -- # bperf_py perform_tests 00:47:58.908 13:10:18 -- host/digest.sh@19 -- # 
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:47:58.908 I/O size of 131072 is greater than zero copy threshold (65536). 00:47:58.908 Zero copy mechanism will not be used. 00:47:58.908 Running I/O for 2 seconds... 00:47:58.908 [2024-07-22 13:10:18.302087] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:58.908 [2024-07-22 13:10:18.302173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:58.908 [2024-07-22 13:10:18.302189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:47:58.908 [2024-07-22 13:10:18.305964] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:58.908 [2024-07-22 13:10:18.306017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:58.908 [2024-07-22 13:10:18.306046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:47:58.908 [2024-07-22 13:10:18.309509] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:58.908 [2024-07-22 13:10:18.309575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:58.908 [2024-07-22 13:10:18.309602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:47:58.908 [2024-07-22 13:10:18.313774] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:58.908 [2024-07-22 13:10:18.313826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:58.908 [2024-07-22 13:10:18.313854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:58.908 [2024-07-22 13:10:18.316892] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:58.908 [2024-07-22 13:10:18.316944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:58.908 [2024-07-22 13:10:18.316972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:47:58.908 [2024-07-22 13:10:18.320319] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:58.908 [2024-07-22 13:10:18.320383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:58.908 [2024-07-22 13:10:18.320412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:47:58.908 [2024-07-22 13:10:18.323743] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:58.908 [2024-07-22 13:10:18.323795] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:58.908 [2024-07-22 13:10:18.323822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:47:58.908 [2024-07-22 13:10:18.327616] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:58.908 [2024-07-22 13:10:18.327674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:58.908 [2024-07-22 13:10:18.327703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:59.167 [2024-07-22 13:10:18.331658] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.167 [2024-07-22 13:10:18.331700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.167 [2024-07-22 13:10:18.331729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:47:59.167 [2024-07-22 13:10:18.335571] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.168 [2024-07-22 13:10:18.335626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.168 [2024-07-22 13:10:18.335660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:47:59.168 [2024-07-22 13:10:18.338832] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.168 [2024-07-22 13:10:18.338886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.168 [2024-07-22 13:10:18.338914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:47:59.168 [2024-07-22 13:10:18.342511] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.168 [2024-07-22 13:10:18.342562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.168 [2024-07-22 13:10:18.342590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:59.168 [2024-07-22 13:10:18.345513] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.168 [2024-07-22 13:10:18.345572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.168 [2024-07-22 13:10:18.345605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:47:59.168 [2024-07-22 13:10:18.348728] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x14ca640) 00:47:59.168 [2024-07-22 13:10:18.348778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.168 [2024-07-22 13:10:18.348805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:47:59.168 [2024-07-22 13:10:18.352274] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.168 [2024-07-22 13:10:18.352326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.168 [2024-07-22 13:10:18.352354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:47:59.168 [2024-07-22 13:10:18.356218] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.168 [2024-07-22 13:10:18.356270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.168 [2024-07-22 13:10:18.356299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:59.168 [2024-07-22 13:10:18.360044] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.168 [2024-07-22 13:10:18.360098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.168 [2024-07-22 13:10:18.360125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:47:59.168 [2024-07-22 13:10:18.363626] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.168 [2024-07-22 13:10:18.363678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.168 [2024-07-22 13:10:18.363706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:47:59.168 [2024-07-22 13:10:18.367778] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.168 [2024-07-22 13:10:18.367830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.168 [2024-07-22 13:10:18.367858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:47:59.168 [2024-07-22 13:10:18.371574] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.168 [2024-07-22 13:10:18.371626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.168 [2024-07-22 13:10:18.371654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:59.168 [2024-07-22 13:10:18.374409] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.168 [2024-07-22 13:10:18.374459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.168 [2024-07-22 13:10:18.374487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:47:59.168 [2024-07-22 13:10:18.378063] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.168 [2024-07-22 13:10:18.378114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.168 [2024-07-22 13:10:18.378142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:47:59.168 [2024-07-22 13:10:18.381430] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.168 [2024-07-22 13:10:18.381481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.168 [2024-07-22 13:10:18.381509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:47:59.168 [2024-07-22 13:10:18.385362] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.168 [2024-07-22 13:10:18.385415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.168 [2024-07-22 13:10:18.385443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:59.168 [2024-07-22 13:10:18.389136] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.168 [2024-07-22 13:10:18.389199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.168 [2024-07-22 13:10:18.389228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:47:59.168 [2024-07-22 13:10:18.393039] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.168 [2024-07-22 13:10:18.393091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.168 [2024-07-22 13:10:18.393119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:47:59.168 [2024-07-22 13:10:18.396961] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.168 [2024-07-22 13:10:18.397013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.168 [2024-07-22 13:10:18.397040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0 00:47:59.168 [2024-07-22 13:10:18.400748] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.168 [2024-07-22 13:10:18.400799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.168 [2024-07-22 13:10:18.400827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:59.168 [2024-07-22 13:10:18.404385] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.168 [2024-07-22 13:10:18.404438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.168 [2024-07-22 13:10:18.404466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:47:59.168 [2024-07-22 13:10:18.407758] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.168 [2024-07-22 13:10:18.407810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.168 [2024-07-22 13:10:18.407837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:47:59.168 [2024-07-22 13:10:18.411339] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.168 [2024-07-22 13:10:18.411391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.168 [2024-07-22 13:10:18.411419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:47:59.168 [2024-07-22 13:10:18.414852] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.168 [2024-07-22 13:10:18.414905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.168 [2024-07-22 13:10:18.414933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:59.168 [2024-07-22 13:10:18.418187] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.168 [2024-07-22 13:10:18.418236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.168 [2024-07-22 13:10:18.418264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:47:59.168 [2024-07-22 13:10:18.421261] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.168 [2024-07-22 13:10:18.421311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.168 [2024-07-22 13:10:18.421339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:47:59.168 [2024-07-22 13:10:18.424452] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.168 [2024-07-22 13:10:18.424507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.168 [2024-07-22 13:10:18.424536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:47:59.168 [2024-07-22 13:10:18.427561] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.168 [2024-07-22 13:10:18.427611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.168 [2024-07-22 13:10:18.427638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:59.168 [2024-07-22 13:10:18.431073] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.169 [2024-07-22 13:10:18.431122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.169 [2024-07-22 13:10:18.431160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:47:59.169 [2024-07-22 13:10:18.435417] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.169 [2024-07-22 13:10:18.435468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.169 [2024-07-22 13:10:18.435496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:47:59.169 [2024-07-22 13:10:18.438368] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.169 [2024-07-22 13:10:18.438419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.169 [2024-07-22 13:10:18.438446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:47:59.169 [2024-07-22 13:10:18.441649] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.169 [2024-07-22 13:10:18.441700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.169 [2024-07-22 13:10:18.441727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:59.169 [2024-07-22 13:10:18.445466] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.169 [2024-07-22 13:10:18.445517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.169 [2024-07-22 13:10:18.445545] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:47:59.169 [2024-07-22 13:10:18.449462] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.169 [2024-07-22 13:10:18.449512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.169 [2024-07-22 13:10:18.449540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:47:59.169 [2024-07-22 13:10:18.453318] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.169 [2024-07-22 13:10:18.453367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.169 [2024-07-22 13:10:18.453395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:47:59.169 [2024-07-22 13:10:18.457655] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.169 [2024-07-22 13:10:18.457696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.169 [2024-07-22 13:10:18.457725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:59.169 [2024-07-22 13:10:18.462630] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.169 [2024-07-22 13:10:18.462687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.169 [2024-07-22 13:10:18.462716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:47:59.169 [2024-07-22 13:10:18.466238] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.169 [2024-07-22 13:10:18.466288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.169 [2024-07-22 13:10:18.466316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:47:59.169 [2024-07-22 13:10:18.469520] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.169 [2024-07-22 13:10:18.469588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.169 [2024-07-22 13:10:18.469616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:47:59.169 [2024-07-22 13:10:18.473224] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.169 [2024-07-22 13:10:18.473277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:47:59.169 [2024-07-22 13:10:18.473305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:59.169 [2024-07-22 13:10:18.477031] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.169 [2024-07-22 13:10:18.477084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.169 [2024-07-22 13:10:18.477112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:47:59.169 [2024-07-22 13:10:18.480702] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.169 [2024-07-22 13:10:18.480755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.169 [2024-07-22 13:10:18.480782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:47:59.169 [2024-07-22 13:10:18.484576] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.169 [2024-07-22 13:10:18.484630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.169 [2024-07-22 13:10:18.484657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:47:59.169 [2024-07-22 13:10:18.488792] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.169 [2024-07-22 13:10:18.488844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.169 [2024-07-22 13:10:18.488872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:59.169 [2024-07-22 13:10:18.492599] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.169 [2024-07-22 13:10:18.492652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.169 [2024-07-22 13:10:18.492680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:47:59.169 [2024-07-22 13:10:18.496333] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.169 [2024-07-22 13:10:18.496385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.169 [2024-07-22 13:10:18.496413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:47:59.169 [2024-07-22 13:10:18.500093] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.169 [2024-07-22 13:10:18.500168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 
lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.169 [2024-07-22 13:10:18.500182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:47:59.169 [2024-07-22 13:10:18.503973] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.169 [2024-07-22 13:10:18.504025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.169 [2024-07-22 13:10:18.504053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:59.169 [2024-07-22 13:10:18.506705] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.169 [2024-07-22 13:10:18.506757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.169 [2024-07-22 13:10:18.506770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:47:59.169 [2024-07-22 13:10:18.509840] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.169 [2024-07-22 13:10:18.509891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.169 [2024-07-22 13:10:18.509920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:47:59.169 [2024-07-22 13:10:18.513414] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.169 [2024-07-22 13:10:18.513466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.169 [2024-07-22 13:10:18.513495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:47:59.169 [2024-07-22 13:10:18.516305] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.169 [2024-07-22 13:10:18.516358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.169 [2024-07-22 13:10:18.516386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:59.169 [2024-07-22 13:10:18.519645] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.169 [2024-07-22 13:10:18.519697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.169 [2024-07-22 13:10:18.519724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:47:59.169 [2024-07-22 13:10:18.523468] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.169 [2024-07-22 13:10:18.523518] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.169 [2024-07-22 13:10:18.523545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:47:59.169 [2024-07-22 13:10:18.527597] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.169 [2024-07-22 13:10:18.527648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.169 [2024-07-22 13:10:18.527676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:47:59.170 [2024-07-22 13:10:18.531095] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.170 [2024-07-22 13:10:18.531172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.170 [2024-07-22 13:10:18.531188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:59.170 [2024-07-22 13:10:18.534660] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.170 [2024-07-22 13:10:18.534713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.170 [2024-07-22 13:10:18.534726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:47:59.170 [2024-07-22 13:10:18.538403] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.170 [2024-07-22 13:10:18.538455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.170 [2024-07-22 13:10:18.538484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:47:59.170 [2024-07-22 13:10:18.541787] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.170 [2024-07-22 13:10:18.541838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.170 [2024-07-22 13:10:18.541866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:47:59.170 [2024-07-22 13:10:18.545350] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.170 [2024-07-22 13:10:18.545403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.170 [2024-07-22 13:10:18.545432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:59.170 [2024-07-22 13:10:18.548967] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 
00:47:59.170 [2024-07-22 13:10:18.549019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.170 [2024-07-22 13:10:18.549048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:47:59.170 [2024-07-22 13:10:18.552396] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.170 [2024-07-22 13:10:18.552446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.170 [2024-07-22 13:10:18.552473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:47:59.170 [2024-07-22 13:10:18.556262] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.170 [2024-07-22 13:10:18.556311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.170 [2024-07-22 13:10:18.556338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:47:59.170 [2024-07-22 13:10:18.559764] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.170 [2024-07-22 13:10:18.559816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.170 [2024-07-22 13:10:18.559844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:59.170 [2024-07-22 13:10:18.563434] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.170 [2024-07-22 13:10:18.563485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.170 [2024-07-22 13:10:18.563513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:47:59.170 [2024-07-22 13:10:18.566533] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.170 [2024-07-22 13:10:18.566583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.170 [2024-07-22 13:10:18.566634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:47:59.170 [2024-07-22 13:10:18.570401] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.170 [2024-07-22 13:10:18.570453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.170 [2024-07-22 13:10:18.570481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:47:59.170 [2024-07-22 13:10:18.573762] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.170 [2024-07-22 13:10:18.573815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.170 [2024-07-22 13:10:18.573828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:59.170 [2024-07-22 13:10:18.577652] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.170 [2024-07-22 13:10:18.577704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.170 [2024-07-22 13:10:18.577731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:47:59.170 [2024-07-22 13:10:18.580910] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.170 [2024-07-22 13:10:18.580961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.170 [2024-07-22 13:10:18.580988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:47:59.170 [2024-07-22 13:10:18.584517] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.170 [2024-07-22 13:10:18.584601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.170 [2024-07-22 13:10:18.584623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:47:59.429 [2024-07-22 13:10:18.588577] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.429 [2024-07-22 13:10:18.588618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.429 [2024-07-22 13:10:18.588647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:59.429 [2024-07-22 13:10:18.592237] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.429 [2024-07-22 13:10:18.592288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.430 [2024-07-22 13:10:18.592316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:47:59.430 [2024-07-22 13:10:18.596223] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.430 [2024-07-22 13:10:18.596276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.430 [2024-07-22 13:10:18.596305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:47:59.430 [2024-07-22 13:10:18.599695] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.430 [2024-07-22 13:10:18.599747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.430 [2024-07-22 13:10:18.599775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:47:59.430 [2024-07-22 13:10:18.603284] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.430 [2024-07-22 13:10:18.603336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.430 [2024-07-22 13:10:18.603364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:59.430 [2024-07-22 13:10:18.606859] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.430 [2024-07-22 13:10:18.606897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.430 [2024-07-22 13:10:18.606911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:47:59.430 [2024-07-22 13:10:18.610194] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.430 [2024-07-22 13:10:18.610244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.430 [2024-07-22 13:10:18.610272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:47:59.430 [2024-07-22 13:10:18.613708] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.430 [2024-07-22 13:10:18.613759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.430 [2024-07-22 13:10:18.613787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:47:59.430 [2024-07-22 13:10:18.617586] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.430 [2024-07-22 13:10:18.617638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.430 [2024-07-22 13:10:18.617666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:59.430 [2024-07-22 13:10:18.620948] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.430 [2024-07-22 13:10:18.620998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.430 [2024-07-22 13:10:18.621026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 00:47:59.430 [2024-07-22 13:10:18.624456] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.430 [2024-07-22 13:10:18.624508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.430 [2024-07-22 13:10:18.624536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:47:59.430 [2024-07-22 13:10:18.628007] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.430 [2024-07-22 13:10:18.628059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.430 [2024-07-22 13:10:18.628087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:47:59.430 [2024-07-22 13:10:18.631999] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.430 [2024-07-22 13:10:18.632052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.430 [2024-07-22 13:10:18.632079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:59.430 [2024-07-22 13:10:18.635703] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.430 [2024-07-22 13:10:18.635755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.430 [2024-07-22 13:10:18.635782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:47:59.430 [2024-07-22 13:10:18.638494] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.430 [2024-07-22 13:10:18.638545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.430 [2024-07-22 13:10:18.638572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:47:59.430 [2024-07-22 13:10:18.642123] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.430 [2024-07-22 13:10:18.642185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.430 [2024-07-22 13:10:18.642214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:47:59.430 [2024-07-22 13:10:18.645468] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.430 [2024-07-22 13:10:18.645520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.430 [2024-07-22 13:10:18.645563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:59.430 [2024-07-22 13:10:18.649129] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.430 [2024-07-22 13:10:18.649206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.430 [2024-07-22 13:10:18.649235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:47:59.430 [2024-07-22 13:10:18.651989] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.430 [2024-07-22 13:10:18.652039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.430 [2024-07-22 13:10:18.652067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:47:59.430 [2024-07-22 13:10:18.655635] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.430 [2024-07-22 13:10:18.655686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.430 [2024-07-22 13:10:18.655713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:47:59.430 [2024-07-22 13:10:18.659157] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.430 [2024-07-22 13:10:18.659216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.430 [2024-07-22 13:10:18.659244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:59.430 [2024-07-22 13:10:18.662627] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.430 [2024-07-22 13:10:18.662675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.430 [2024-07-22 13:10:18.662702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:47:59.430 [2024-07-22 13:10:18.666157] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.430 [2024-07-22 13:10:18.666215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.430 [2024-07-22 13:10:18.666243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:47:59.430 [2024-07-22 13:10:18.669951] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.430 [2024-07-22 13:10:18.670002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.430 [2024-07-22 13:10:18.670030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:47:59.430 [2024-07-22 13:10:18.673674] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.430 [2024-07-22 13:10:18.673724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.430 [2024-07-22 13:10:18.673752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:59.430 [2024-07-22 13:10:18.677791] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.430 [2024-07-22 13:10:18.677843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.430 [2024-07-22 13:10:18.677871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:47:59.430 [2024-07-22 13:10:18.681074] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.430 [2024-07-22 13:10:18.681126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.430 [2024-07-22 13:10:18.681181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:47:59.430 [2024-07-22 13:10:18.685230] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.430 [2024-07-22 13:10:18.685282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.430 [2024-07-22 13:10:18.685311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:47:59.430 [2024-07-22 13:10:18.688718] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.430 [2024-07-22 13:10:18.688770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.431 [2024-07-22 13:10:18.688797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:59.431 [2024-07-22 13:10:18.692879] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.431 [2024-07-22 13:10:18.692932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.431 [2024-07-22 13:10:18.692959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:47:59.431 [2024-07-22 13:10:18.696902] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.431 [2024-07-22 13:10:18.696954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.431 [2024-07-22 13:10:18.696982] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:47:59.431 [2024-07-22 13:10:18.700698] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.431 [2024-07-22 13:10:18.700750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.431 [2024-07-22 13:10:18.700777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:47:59.431 [2024-07-22 13:10:18.704594] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.431 [2024-07-22 13:10:18.704646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.431 [2024-07-22 13:10:18.704674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:59.431 [2024-07-22 13:10:18.708463] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.431 [2024-07-22 13:10:18.708514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.431 [2024-07-22 13:10:18.708541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:47:59.431 [2024-07-22 13:10:18.711942] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.431 [2024-07-22 13:10:18.711994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.431 [2024-07-22 13:10:18.712021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:47:59.431 [2024-07-22 13:10:18.715696] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.431 [2024-07-22 13:10:18.715751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.431 [2024-07-22 13:10:18.715779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:47:59.431 [2024-07-22 13:10:18.719481] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.431 [2024-07-22 13:10:18.719557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.431 [2024-07-22 13:10:18.719594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:59.431 [2024-07-22 13:10:18.722978] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.431 [2024-07-22 13:10:18.723032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.431 
[2024-07-22 13:10:18.723060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:47:59.431 [2024-07-22 13:10:18.726256] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.431 [2024-07-22 13:10:18.726307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.431 [2024-07-22 13:10:18.726334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:47:59.431 [2024-07-22 13:10:18.730120] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.431 [2024-07-22 13:10:18.730182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.431 [2024-07-22 13:10:18.730211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:47:59.431 [2024-07-22 13:10:18.733375] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.431 [2024-07-22 13:10:18.733426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.431 [2024-07-22 13:10:18.733455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:59.431 [2024-07-22 13:10:18.737381] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.431 [2024-07-22 13:10:18.737433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.431 [2024-07-22 13:10:18.737462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:47:59.431 [2024-07-22 13:10:18.741189] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.431 [2024-07-22 13:10:18.741240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.431 [2024-07-22 13:10:18.741268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:47:59.431 [2024-07-22 13:10:18.744695] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.431 [2024-07-22 13:10:18.744747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.431 [2024-07-22 13:10:18.744775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:47:59.431 [2024-07-22 13:10:18.748276] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.431 [2024-07-22 13:10:18.748334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8608 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:47:59.431 [2024-07-22 13:10:18.748362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:59.431 [2024-07-22 13:10:18.751876] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.431 [2024-07-22 13:10:18.751928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.431 [2024-07-22 13:10:18.751956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:47:59.431 [2024-07-22 13:10:18.755433] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.431 [2024-07-22 13:10:18.755485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.431 [2024-07-22 13:10:18.755513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:47:59.431 [2024-07-22 13:10:18.758518] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.431 [2024-07-22 13:10:18.758568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.431 [2024-07-22 13:10:18.758595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:47:59.431 [2024-07-22 13:10:18.762671] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.431 [2024-07-22 13:10:18.762725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.431 [2024-07-22 13:10:18.762754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:59.431 [2024-07-22 13:10:18.766271] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.431 [2024-07-22 13:10:18.766325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.431 [2024-07-22 13:10:18.766354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:47:59.431 [2024-07-22 13:10:18.770362] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.431 [2024-07-22 13:10:18.770416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.431 [2024-07-22 13:10:18.770447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:47:59.431 [2024-07-22 13:10:18.774209] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.431 [2024-07-22 13:10:18.774258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:6 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.431 [2024-07-22 13:10:18.774273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:47:59.431 [2024-07-22 13:10:18.778104] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.431 [2024-07-22 13:10:18.778196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.431 [2024-07-22 13:10:18.778213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:59.431 [2024-07-22 13:10:18.782762] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.431 [2024-07-22 13:10:18.782803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.431 [2024-07-22 13:10:18.782817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:47:59.431 [2024-07-22 13:10:18.786994] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.431 [2024-07-22 13:10:18.787047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.431 [2024-07-22 13:10:18.787075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:47:59.431 [2024-07-22 13:10:18.790733] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.432 [2024-07-22 13:10:18.790787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.432 [2024-07-22 13:10:18.790816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:47:59.432 [2024-07-22 13:10:18.795122] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.432 [2024-07-22 13:10:18.795197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.432 [2024-07-22 13:10:18.795226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:59.432 [2024-07-22 13:10:18.799184] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.432 [2024-07-22 13:10:18.799246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.432 [2024-07-22 13:10:18.799275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:47:59.432 [2024-07-22 13:10:18.803256] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.432 [2024-07-22 13:10:18.803309] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.432 [2024-07-22 13:10:18.803339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:47:59.432 [2024-07-22 13:10:18.807389] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.432 [2024-07-22 13:10:18.807443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.432 [2024-07-22 13:10:18.807472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:47:59.432 [2024-07-22 13:10:18.811341] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.432 [2024-07-22 13:10:18.811394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.432 [2024-07-22 13:10:18.811423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:59.432 [2024-07-22 13:10:18.815436] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.432 [2024-07-22 13:10:18.815489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.432 [2024-07-22 13:10:18.815518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:47:59.432 [2024-07-22 13:10:18.819333] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.432 [2024-07-22 13:10:18.819386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.432 [2024-07-22 13:10:18.819415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:47:59.432 [2024-07-22 13:10:18.823228] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.432 [2024-07-22 13:10:18.823281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.432 [2024-07-22 13:10:18.823310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:47:59.432 [2024-07-22 13:10:18.827175] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.432 [2024-07-22 13:10:18.827238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.432 [2024-07-22 13:10:18.827267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:59.432 [2024-07-22 13:10:18.830911] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.432 
[2024-07-22 13:10:18.830979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.432 [2024-07-22 13:10:18.831007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:47:59.432 [2024-07-22 13:10:18.834662] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.432 [2024-07-22 13:10:18.834713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.432 [2024-07-22 13:10:18.834741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:47:59.432 [2024-07-22 13:10:18.838815] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.432 [2024-07-22 13:10:18.838869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.432 [2024-07-22 13:10:18.838899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:47:59.432 [2024-07-22 13:10:18.842379] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.432 [2024-07-22 13:10:18.842432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.432 [2024-07-22 13:10:18.842460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:59.432 [2024-07-22 13:10:18.846143] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.432 [2024-07-22 13:10:18.846190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.432 [2024-07-22 13:10:18.846220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:47:59.692 [2024-07-22 13:10:18.849850] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.692 [2024-07-22 13:10:18.849906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.692 [2024-07-22 13:10:18.849950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:47:59.692 [2024-07-22 13:10:18.854309] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.692 [2024-07-22 13:10:18.854364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.692 [2024-07-22 13:10:18.854393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:47:59.692 [2024-07-22 13:10:18.859056] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x14ca640) 00:47:59.692 [2024-07-22 13:10:18.859111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.692 [2024-07-22 13:10:18.859140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:59.692 [2024-07-22 13:10:18.863298] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.692 [2024-07-22 13:10:18.863352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.692 [2024-07-22 13:10:18.863381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:47:59.692 [2024-07-22 13:10:18.867381] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.692 [2024-07-22 13:10:18.867433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.692 [2024-07-22 13:10:18.867461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:47:59.692 [2024-07-22 13:10:18.870630] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.692 [2024-07-22 13:10:18.870668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.692 [2024-07-22 13:10:18.870696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:47:59.692 [2024-07-22 13:10:18.874506] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.692 [2024-07-22 13:10:18.874557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.692 [2024-07-22 13:10:18.874586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:59.692 [2024-07-22 13:10:18.878487] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.692 [2024-07-22 13:10:18.878538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.692 [2024-07-22 13:10:18.878566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:47:59.692 [2024-07-22 13:10:18.882460] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.692 [2024-07-22 13:10:18.882514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.692 [2024-07-22 13:10:18.882542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:47:59.692 [2024-07-22 13:10:18.886201] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.692 [2024-07-22 13:10:18.886263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.692 [2024-07-22 13:10:18.886291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:47:59.692 [2024-07-22 13:10:18.890296] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.692 [2024-07-22 13:10:18.890348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.692 [2024-07-22 13:10:18.890376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:59.692 [2024-07-22 13:10:18.894031] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.692 [2024-07-22 13:10:18.894083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.692 [2024-07-22 13:10:18.894111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:47:59.692 [2024-07-22 13:10:18.896989] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.692 [2024-07-22 13:10:18.897041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.692 [2024-07-22 13:10:18.897068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:47:59.692 [2024-07-22 13:10:18.901093] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.692 [2024-07-22 13:10:18.901185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.692 [2024-07-22 13:10:18.901200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:47:59.692 [2024-07-22 13:10:18.905128] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.692 [2024-07-22 13:10:18.905226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.693 [2024-07-22 13:10:18.905254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:59.693 [2024-07-22 13:10:18.908519] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.693 [2024-07-22 13:10:18.908570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.693 [2024-07-22 13:10:18.908599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 00:47:59.693 [2024-07-22 13:10:18.911714] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.693 [2024-07-22 13:10:18.911765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.693 [2024-07-22 13:10:18.911793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:47:59.693 [2024-07-22 13:10:18.915723] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.693 [2024-07-22 13:10:18.915774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.693 [2024-07-22 13:10:18.915802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:47:59.693 [2024-07-22 13:10:18.919581] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.693 [2024-07-22 13:10:18.919632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.693 [2024-07-22 13:10:18.919660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:59.693 [2024-07-22 13:10:18.923208] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.693 [2024-07-22 13:10:18.923259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.693 [2024-07-22 13:10:18.923287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:47:59.693 [2024-07-22 13:10:18.926539] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.693 [2024-07-22 13:10:18.926589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.693 [2024-07-22 13:10:18.926651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:47:59.693 [2024-07-22 13:10:18.929759] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.693 [2024-07-22 13:10:18.929810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.693 [2024-07-22 13:10:18.929837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:47:59.693 [2024-07-22 13:10:18.933591] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.693 [2024-07-22 13:10:18.933644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.693 [2024-07-22 13:10:18.933672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:59.693 [2024-07-22 13:10:18.937251] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.693 [2024-07-22 13:10:18.937303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.693 [2024-07-22 13:10:18.937331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:47:59.693 [2024-07-22 13:10:18.940740] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.693 [2024-07-22 13:10:18.940791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.693 [2024-07-22 13:10:18.940819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:47:59.693 [2024-07-22 13:10:18.944221] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.693 [2024-07-22 13:10:18.944273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.693 [2024-07-22 13:10:18.944301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:47:59.693 [2024-07-22 13:10:18.947938] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.693 [2024-07-22 13:10:18.947991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.693 [2024-07-22 13:10:18.948019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:59.693 [2024-07-22 13:10:18.951429] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.693 [2024-07-22 13:10:18.951481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.693 [2024-07-22 13:10:18.951509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:47:59.693 [2024-07-22 13:10:18.954742] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.693 [2024-07-22 13:10:18.954796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.693 [2024-07-22 13:10:18.954825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:47:59.693 [2024-07-22 13:10:18.958251] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.693 [2024-07-22 13:10:18.958301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.693 [2024-07-22 13:10:18.958329] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:47:59.693 [2024-07-22 13:10:18.961730] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.693 [2024-07-22 13:10:18.961781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.693 [2024-07-22 13:10:18.961809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:59.693 [2024-07-22 13:10:18.965536] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.693 [2024-07-22 13:10:18.965604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.693 [2024-07-22 13:10:18.965631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:47:59.693 [2024-07-22 13:10:18.968777] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.693 [2024-07-22 13:10:18.968828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.693 [2024-07-22 13:10:18.968855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:47:59.693 [2024-07-22 13:10:18.972889] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.693 [2024-07-22 13:10:18.972941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.693 [2024-07-22 13:10:18.972969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:47:59.693 [2024-07-22 13:10:18.976343] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.693 [2024-07-22 13:10:18.976395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.693 [2024-07-22 13:10:18.976423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:59.693 [2024-07-22 13:10:18.979896] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.693 [2024-07-22 13:10:18.979947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.693 [2024-07-22 13:10:18.979975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:47:59.693 [2024-07-22 13:10:18.983650] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.693 [2024-07-22 13:10:18.983702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.693 
[2024-07-22 13:10:18.983729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:47:59.693 [2024-07-22 13:10:18.987155] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.693 [2024-07-22 13:10:18.987216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.693 [2024-07-22 13:10:18.987245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:47:59.693 [2024-07-22 13:10:18.990577] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.693 [2024-07-22 13:10:18.990661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.693 [2024-07-22 13:10:18.990690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:59.693 [2024-07-22 13:10:18.993807] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.693 [2024-07-22 13:10:18.993857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.693 [2024-07-22 13:10:18.993884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:47:59.693 [2024-07-22 13:10:18.997016] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.693 [2024-07-22 13:10:18.997067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.693 [2024-07-22 13:10:18.997095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:47:59.694 [2024-07-22 13:10:19.000757] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.694 [2024-07-22 13:10:19.000809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.694 [2024-07-22 13:10:19.000837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:47:59.694 [2024-07-22 13:10:19.005340] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.694 [2024-07-22 13:10:19.005392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.694 [2024-07-22 13:10:19.005420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:59.694 [2024-07-22 13:10:19.008613] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.694 [2024-07-22 13:10:19.008664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15680 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.694 [2024-07-22 13:10:19.008692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:47:59.694 [2024-07-22 13:10:19.011867] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.694 [2024-07-22 13:10:19.011920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.694 [2024-07-22 13:10:19.011947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:47:59.694 [2024-07-22 13:10:19.015111] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.694 [2024-07-22 13:10:19.015172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.694 [2024-07-22 13:10:19.015200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:47:59.694 [2024-07-22 13:10:19.018576] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.694 [2024-07-22 13:10:19.018653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.694 [2024-07-22 13:10:19.018683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:59.694 [2024-07-22 13:10:19.021933] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.694 [2024-07-22 13:10:19.021973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.694 [2024-07-22 13:10:19.022001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:47:59.694 [2024-07-22 13:10:19.025691] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.694 [2024-07-22 13:10:19.025734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.694 [2024-07-22 13:10:19.025762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:47:59.694 [2024-07-22 13:10:19.030275] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.694 [2024-07-22 13:10:19.030328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.694 [2024-07-22 13:10:19.030356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:47:59.694 [2024-07-22 13:10:19.034178] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.694 [2024-07-22 13:10:19.034229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:6 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.694 [2024-07-22 13:10:19.034257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:59.694 [2024-07-22 13:10:19.037597] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.694 [2024-07-22 13:10:19.037648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.694 [2024-07-22 13:10:19.037676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:47:59.694 [2024-07-22 13:10:19.040822] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.694 [2024-07-22 13:10:19.040873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.694 [2024-07-22 13:10:19.040900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:47:59.694 [2024-07-22 13:10:19.044437] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.694 [2024-07-22 13:10:19.044488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.694 [2024-07-22 13:10:19.044515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:47:59.694 [2024-07-22 13:10:19.048360] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.694 [2024-07-22 13:10:19.048398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.694 [2024-07-22 13:10:19.048427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:59.694 [2024-07-22 13:10:19.051956] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.694 [2024-07-22 13:10:19.052007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.694 [2024-07-22 13:10:19.052034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:47:59.694 [2024-07-22 13:10:19.055495] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.694 [2024-07-22 13:10:19.055530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.694 [2024-07-22 13:10:19.055558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:47:59.694 [2024-07-22 13:10:19.059017] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.694 [2024-07-22 13:10:19.059051] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.694 [2024-07-22 13:10:19.059079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:47:59.694 [2024-07-22 13:10:19.062887] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.694 [2024-07-22 13:10:19.062940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.694 [2024-07-22 13:10:19.062968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:59.694 [2024-07-22 13:10:19.066080] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.694 [2024-07-22 13:10:19.066116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.694 [2024-07-22 13:10:19.066145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:47:59.694 [2024-07-22 13:10:19.069699] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.694 [2024-07-22 13:10:19.069734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.694 [2024-07-22 13:10:19.069762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:47:59.694 [2024-07-22 13:10:19.073455] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.694 [2024-07-22 13:10:19.073490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.694 [2024-07-22 13:10:19.073518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:47:59.694 [2024-07-22 13:10:19.076512] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.694 [2024-07-22 13:10:19.076548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.694 [2024-07-22 13:10:19.076576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:59.694 [2024-07-22 13:10:19.080086] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.694 [2024-07-22 13:10:19.080121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.694 [2024-07-22 13:10:19.080159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:47:59.694 [2024-07-22 13:10:19.083705] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.694 
[2024-07-22 13:10:19.083739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.694 [2024-07-22 13:10:19.083767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:47:59.694 [2024-07-22 13:10:19.087017] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.694 [2024-07-22 13:10:19.087052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.694 [2024-07-22 13:10:19.087080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:47:59.694 [2024-07-22 13:10:19.090833] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.694 [2024-07-22 13:10:19.090870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.694 [2024-07-22 13:10:19.090899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:59.694 [2024-07-22 13:10:19.094233] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.694 [2024-07-22 13:10:19.094285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.694 [2024-07-22 13:10:19.094315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:47:59.695 [2024-07-22 13:10:19.097722] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.695 [2024-07-22 13:10:19.097757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.695 [2024-07-22 13:10:19.097786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:47:59.695 [2024-07-22 13:10:19.102639] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.695 [2024-07-22 13:10:19.102677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.695 [2024-07-22 13:10:19.102707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:47:59.695 [2024-07-22 13:10:19.106341] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.695 [2024-07-22 13:10:19.106379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.695 [2024-07-22 13:10:19.106409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:59.695 [2024-07-22 13:10:19.110788] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x14ca640) 00:47:59.695 [2024-07-22 13:10:19.110832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.695 [2024-07-22 13:10:19.110847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:47:59.955 [2024-07-22 13:10:19.115084] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.955 [2024-07-22 13:10:19.115123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.955 [2024-07-22 13:10:19.115177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:47:59.955 [2024-07-22 13:10:19.119448] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.955 [2024-07-22 13:10:19.119504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.955 [2024-07-22 13:10:19.119554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:47:59.955 [2024-07-22 13:10:19.123045] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.955 [2024-07-22 13:10:19.123082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.955 [2024-07-22 13:10:19.123111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:59.955 [2024-07-22 13:10:19.127014] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.955 [2024-07-22 13:10:19.127050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.955 [2024-07-22 13:10:19.127078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:47:59.955 [2024-07-22 13:10:19.130480] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.955 [2024-07-22 13:10:19.130747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.955 [2024-07-22 13:10:19.130950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:47:59.955 [2024-07-22 13:10:19.134560] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.955 [2024-07-22 13:10:19.134781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.955 [2024-07-22 13:10:19.135047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:47:59.955 [2024-07-22 13:10:19.139315] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.955 [2024-07-22 13:10:19.139524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.955 [2024-07-22 13:10:19.139665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:59.955 [2024-07-22 13:10:19.143260] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.955 [2024-07-22 13:10:19.143468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.955 [2024-07-22 13:10:19.143610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:47:59.955 [2024-07-22 13:10:19.147652] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.955 [2024-07-22 13:10:19.147848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.955 [2024-07-22 13:10:19.147988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:47:59.955 [2024-07-22 13:10:19.151990] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.955 [2024-07-22 13:10:19.152223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.955 [2024-07-22 13:10:19.152378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:47:59.955 [2024-07-22 13:10:19.155921] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.955 [2024-07-22 13:10:19.155958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.955 [2024-07-22 13:10:19.155987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:59.955 [2024-07-22 13:10:19.158895] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.955 [2024-07-22 13:10:19.158948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.955 [2024-07-22 13:10:19.158976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:47:59.955 [2024-07-22 13:10:19.162597] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.955 [2024-07-22 13:10:19.162657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.955 [2024-07-22 13:10:19.162685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:47:59.955 [2024-07-22 13:10:19.166103] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.955 [2024-07-22 13:10:19.166163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.955 [2024-07-22 13:10:19.166177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:47:59.955 [2024-07-22 13:10:19.169764] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.955 [2024-07-22 13:10:19.169800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.955 [2024-07-22 13:10:19.169828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:59.955 [2024-07-22 13:10:19.172628] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.955 [2024-07-22 13:10:19.172664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.955 [2024-07-22 13:10:19.172692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:47:59.955 [2024-07-22 13:10:19.176481] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.955 [2024-07-22 13:10:19.176516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.955 [2024-07-22 13:10:19.176544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:47:59.955 [2024-07-22 13:10:19.179974] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.955 [2024-07-22 13:10:19.180009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.955 [2024-07-22 13:10:19.180037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:47:59.955 [2024-07-22 13:10:19.183723] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.955 [2024-07-22 13:10:19.183904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.955 [2024-07-22 13:10:19.183937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:59.955 [2024-07-22 13:10:19.187145] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.955 [2024-07-22 13:10:19.187202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.955 [2024-07-22 13:10:19.187216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:47:59.955 [2024-07-22 13:10:19.190698] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.956 [2024-07-22 13:10:19.190734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.956 [2024-07-22 13:10:19.190762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:47:59.956 [2024-07-22 13:10:19.194430] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.956 [2024-07-22 13:10:19.194465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.956 [2024-07-22 13:10:19.194493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:47:59.956 [2024-07-22 13:10:19.197409] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.956 [2024-07-22 13:10:19.197444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.956 [2024-07-22 13:10:19.197472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:59.956 [2024-07-22 13:10:19.200793] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.956 [2024-07-22 13:10:19.200829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.956 [2024-07-22 13:10:19.200856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:47:59.956 [2024-07-22 13:10:19.204269] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.956 [2024-07-22 13:10:19.204304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.956 [2024-07-22 13:10:19.204333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:47:59.956 [2024-07-22 13:10:19.208048] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.956 [2024-07-22 13:10:19.208084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.956 [2024-07-22 13:10:19.208112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:47:59.956 [2024-07-22 13:10:19.211601] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.956 [2024-07-22 13:10:19.211638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.956 [2024-07-22 13:10:19.211666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:59.956 [2024-07-22 13:10:19.215345] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.956 [2024-07-22 13:10:19.215380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.956 [2024-07-22 13:10:19.215408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:47:59.956 [2024-07-22 13:10:19.218498] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.956 [2024-07-22 13:10:19.218533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.956 [2024-07-22 13:10:19.218561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:47:59.956 [2024-07-22 13:10:19.221738] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.956 [2024-07-22 13:10:19.221772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.956 [2024-07-22 13:10:19.221800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:47:59.956 [2024-07-22 13:10:19.224838] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.956 [2024-07-22 13:10:19.224874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.956 [2024-07-22 13:10:19.224903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:59.956 [2024-07-22 13:10:19.228347] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.956 [2024-07-22 13:10:19.228382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.956 [2024-07-22 13:10:19.228410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:47:59.956 [2024-07-22 13:10:19.231708] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.956 [2024-07-22 13:10:19.231743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.956 [2024-07-22 13:10:19.231770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:47:59.956 [2024-07-22 13:10:19.235086] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.956 [2024-07-22 13:10:19.235121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.956 [2024-07-22 13:10:19.235159] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:47:59.956 [2024-07-22 13:10:19.238317] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.956 [2024-07-22 13:10:19.238351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.956 [2024-07-22 13:10:19.238378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:59.956 [2024-07-22 13:10:19.242256] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.956 [2024-07-22 13:10:19.242292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.956 [2024-07-22 13:10:19.242320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:47:59.956 [2024-07-22 13:10:19.246226] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.956 [2024-07-22 13:10:19.246259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.956 [2024-07-22 13:10:19.246287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:47:59.956 [2024-07-22 13:10:19.249296] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.956 [2024-07-22 13:10:19.249331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.956 [2024-07-22 13:10:19.249360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:47:59.956 [2024-07-22 13:10:19.252041] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.956 [2024-07-22 13:10:19.252076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.956 [2024-07-22 13:10:19.252104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:59.956 [2024-07-22 13:10:19.255894] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.956 [2024-07-22 13:10:19.255930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.956 [2024-07-22 13:10:19.255958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:47:59.956 [2024-07-22 13:10:19.258845] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.956 [2024-07-22 13:10:19.258882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.956 
[2024-07-22 13:10:19.258911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:47:59.956 [2024-07-22 13:10:19.262151] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.956 [2024-07-22 13:10:19.262183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.956 [2024-07-22 13:10:19.262211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:47:59.956 [2024-07-22 13:10:19.266064] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.956 [2024-07-22 13:10:19.266099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.956 [2024-07-22 13:10:19.266129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:59.956 [2024-07-22 13:10:19.269662] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.956 [2024-07-22 13:10:19.269696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.956 [2024-07-22 13:10:19.269724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:47:59.956 [2024-07-22 13:10:19.273084] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.956 [2024-07-22 13:10:19.273120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.956 [2024-07-22 13:10:19.273158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:47:59.956 [2024-07-22 13:10:19.276317] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.956 [2024-07-22 13:10:19.276351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.956 [2024-07-22 13:10:19.276378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:47:59.956 [2024-07-22 13:10:19.280359] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.956 [2024-07-22 13:10:19.280396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.956 [2024-07-22 13:10:19.280424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:59.956 [2024-07-22 13:10:19.284400] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.957 [2024-07-22 13:10:19.284439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6528 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.957 [2024-07-22 13:10:19.284467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:47:59.957 [2024-07-22 13:10:19.288767] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.957 [2024-07-22 13:10:19.288806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.957 [2024-07-22 13:10:19.288834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:47:59.957 [2024-07-22 13:10:19.292710] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.957 [2024-07-22 13:10:19.292745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.957 [2024-07-22 13:10:19.292774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:47:59.957 [2024-07-22 13:10:19.296055] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.957 [2024-07-22 13:10:19.296091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.957 [2024-07-22 13:10:19.296119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:59.957 [2024-07-22 13:10:19.299778] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.957 [2024-07-22 13:10:19.299814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.957 [2024-07-22 13:10:19.299843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:47:59.957 [2024-07-22 13:10:19.302901] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.957 [2024-07-22 13:10:19.302969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.957 [2024-07-22 13:10:19.302997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:47:59.957 [2024-07-22 13:10:19.306143] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.957 [2024-07-22 13:10:19.306196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.957 [2024-07-22 13:10:19.306210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:47:59.957 [2024-07-22 13:10:19.309402] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.957 [2024-07-22 13:10:19.309438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:1 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.957 [2024-07-22 13:10:19.309451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:59.957 [2024-07-22 13:10:19.313616] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.957 [2024-07-22 13:10:19.313650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.957 [2024-07-22 13:10:19.313679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:47:59.957 [2024-07-22 13:10:19.317266] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.957 [2024-07-22 13:10:19.317301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.957 [2024-07-22 13:10:19.317330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:47:59.957 [2024-07-22 13:10:19.320955] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.957 [2024-07-22 13:10:19.320990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.957 [2024-07-22 13:10:19.321018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:47:59.957 [2024-07-22 13:10:19.325252] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.957 [2024-07-22 13:10:19.325288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.957 [2024-07-22 13:10:19.325316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:59.957 [2024-07-22 13:10:19.329198] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.957 [2024-07-22 13:10:19.329233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.957 [2024-07-22 13:10:19.329261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:47:59.957 [2024-07-22 13:10:19.332926] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.957 [2024-07-22 13:10:19.332961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.957 [2024-07-22 13:10:19.332990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:47:59.957 [2024-07-22 13:10:19.336601] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.957 [2024-07-22 13:10:19.336637] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.957 [2024-07-22 13:10:19.336651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:47:59.957 [2024-07-22 13:10:19.340688] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.957 [2024-07-22 13:10:19.340724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.957 [2024-07-22 13:10:19.340752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:59.957 [2024-07-22 13:10:19.344586] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.957 [2024-07-22 13:10:19.344621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.957 [2024-07-22 13:10:19.344650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:47:59.957 [2024-07-22 13:10:19.348536] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.957 [2024-07-22 13:10:19.348571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.957 [2024-07-22 13:10:19.348600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:47:59.957 [2024-07-22 13:10:19.352014] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.957 [2024-07-22 13:10:19.352049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.957 [2024-07-22 13:10:19.352077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:47:59.957 [2024-07-22 13:10:19.355824] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.957 [2024-07-22 13:10:19.355860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.957 [2024-07-22 13:10:19.355888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:59.957 [2024-07-22 13:10:19.359129] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.957 [2024-07-22 13:10:19.359203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.957 [2024-07-22 13:10:19.359217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:47:59.957 [2024-07-22 13:10:19.362183] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.957 
[2024-07-22 13:10:19.362217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.957 [2024-07-22 13:10:19.362244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:47:59.957 [2024-07-22 13:10:19.365723] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.957 [2024-07-22 13:10:19.365757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.957 [2024-07-22 13:10:19.365785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:47:59.957 [2024-07-22 13:10:19.368864] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.957 [2024-07-22 13:10:19.368899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.957 [2024-07-22 13:10:19.368926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:47:59.957 [2024-07-22 13:10:19.372326] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:47:59.957 [2024-07-22 13:10:19.372362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:59.957 [2024-07-22 13:10:19.372391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:00.218 [2024-07-22 13:10:19.377471] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.218 [2024-07-22 13:10:19.377510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.218 [2024-07-22 13:10:19.377540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:00.219 [2024-07-22 13:10:19.381133] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.219 [2024-07-22 13:10:19.381191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.219 [2024-07-22 13:10:19.381205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:00.219 [2024-07-22 13:10:19.384841] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.219 [2024-07-22 13:10:19.384879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.219 [2024-07-22 13:10:19.384908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:00.219 [2024-07-22 13:10:19.388988] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x14ca640) 00:48:00.219 [2024-07-22 13:10:19.389024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.219 [2024-07-22 13:10:19.389053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:00.219 [2024-07-22 13:10:19.392691] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.219 [2024-07-22 13:10:19.392728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.219 [2024-07-22 13:10:19.392756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:00.219 [2024-07-22 13:10:19.395826] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.219 [2024-07-22 13:10:19.395863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.219 [2024-07-22 13:10:19.395890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:00.219 [2024-07-22 13:10:19.399662] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.219 [2024-07-22 13:10:19.399699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.219 [2024-07-22 13:10:19.399727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:00.219 [2024-07-22 13:10:19.403596] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.219 [2024-07-22 13:10:19.403631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.219 [2024-07-22 13:10:19.403660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:00.219 [2024-07-22 13:10:19.407098] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.219 [2024-07-22 13:10:19.407132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.219 [2024-07-22 13:10:19.407185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:00.219 [2024-07-22 13:10:19.410657] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.219 [2024-07-22 13:10:19.410694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.219 [2024-07-22 13:10:19.410723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:00.219 [2024-07-22 13:10:19.413470] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.219 [2024-07-22 13:10:19.413505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.219 [2024-07-22 13:10:19.413534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:00.219 [2024-07-22 13:10:19.416836] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.219 [2024-07-22 13:10:19.416872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.219 [2024-07-22 13:10:19.416900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:00.219 [2024-07-22 13:10:19.420266] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.219 [2024-07-22 13:10:19.420301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.219 [2024-07-22 13:10:19.420329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:00.219 [2024-07-22 13:10:19.424009] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.219 [2024-07-22 13:10:19.424045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.219 [2024-07-22 13:10:19.424073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:00.219 [2024-07-22 13:10:19.427496] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.219 [2024-07-22 13:10:19.427530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.219 [2024-07-22 13:10:19.427558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:00.219 [2024-07-22 13:10:19.431779] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.219 [2024-07-22 13:10:19.431818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.219 [2024-07-22 13:10:19.431847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:00.219 [2024-07-22 13:10:19.436431] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.219 [2024-07-22 13:10:19.436471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.219 [2024-07-22 13:10:19.436501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:48:00.219 [2024-07-22 13:10:19.439968] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.219 [2024-07-22 13:10:19.440006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.219 [2024-07-22 13:10:19.440034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:00.219 [2024-07-22 13:10:19.443779] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.219 [2024-07-22 13:10:19.443814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.219 [2024-07-22 13:10:19.443843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:00.219 [2024-07-22 13:10:19.447179] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.219 [2024-07-22 13:10:19.447211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.219 [2024-07-22 13:10:19.447239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:00.219 [2024-07-22 13:10:19.450492] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.219 [2024-07-22 13:10:19.450526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.219 [2024-07-22 13:10:19.450554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:00.219 [2024-07-22 13:10:19.453931] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.219 [2024-07-22 13:10:19.453965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.219 [2024-07-22 13:10:19.453993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:00.219 [2024-07-22 13:10:19.456260] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.219 [2024-07-22 13:10:19.456294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.219 [2024-07-22 13:10:19.456322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:00.219 [2024-07-22 13:10:19.459408] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.219 [2024-07-22 13:10:19.459443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.219 [2024-07-22 13:10:19.459470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:00.219 [2024-07-22 13:10:19.462757] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.219 [2024-07-22 13:10:19.462794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.219 [2024-07-22 13:10:19.462822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:00.219 [2024-07-22 13:10:19.466278] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.219 [2024-07-22 13:10:19.466315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.219 [2024-07-22 13:10:19.466344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:00.219 [2024-07-22 13:10:19.469985] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.219 [2024-07-22 13:10:19.470021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.219 [2024-07-22 13:10:19.470049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:00.219 [2024-07-22 13:10:19.473246] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.219 [2024-07-22 13:10:19.473279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.219 [2024-07-22 13:10:19.473307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:00.220 [2024-07-22 13:10:19.476920] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.220 [2024-07-22 13:10:19.476954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.220 [2024-07-22 13:10:19.476982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:00.220 [2024-07-22 13:10:19.479963] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.220 [2024-07-22 13:10:19.479997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.220 [2024-07-22 13:10:19.480025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:00.220 [2024-07-22 13:10:19.483768] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.220 [2024-07-22 13:10:19.483804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.220 [2024-07-22 13:10:19.483832] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:00.220 [2024-07-22 13:10:19.487183] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.220 [2024-07-22 13:10:19.487218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.220 [2024-07-22 13:10:19.487245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:00.220 [2024-07-22 13:10:19.490710] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.220 [2024-07-22 13:10:19.490747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.220 [2024-07-22 13:10:19.490776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:00.220 [2024-07-22 13:10:19.494130] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.220 [2024-07-22 13:10:19.494173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.220 [2024-07-22 13:10:19.494201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:00.220 [2024-07-22 13:10:19.497326] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.220 [2024-07-22 13:10:19.497359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.220 [2024-07-22 13:10:19.497387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:00.220 [2024-07-22 13:10:19.501324] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.220 [2024-07-22 13:10:19.501359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.220 [2024-07-22 13:10:19.501387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:00.220 [2024-07-22 13:10:19.504481] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.220 [2024-07-22 13:10:19.504518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.220 [2024-07-22 13:10:19.504560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:00.220 [2024-07-22 13:10:19.508494] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.220 [2024-07-22 13:10:19.508530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.220 [2024-07-22 13:10:19.508573] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:00.220 [2024-07-22 13:10:19.511880] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.220 [2024-07-22 13:10:19.511916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.220 [2024-07-22 13:10:19.511945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:00.220 [2024-07-22 13:10:19.515285] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.220 [2024-07-22 13:10:19.515319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.220 [2024-07-22 13:10:19.515347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:00.220 [2024-07-22 13:10:19.519105] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.220 [2024-07-22 13:10:19.519158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.220 [2024-07-22 13:10:19.519172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:00.220 [2024-07-22 13:10:19.522646] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.220 [2024-07-22 13:10:19.522681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.220 [2024-07-22 13:10:19.522709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:00.220 [2024-07-22 13:10:19.525823] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.220 [2024-07-22 13:10:19.525858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.220 [2024-07-22 13:10:19.525886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:00.220 [2024-07-22 13:10:19.529371] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.220 [2024-07-22 13:10:19.529406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.220 [2024-07-22 13:10:19.529434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:00.220 [2024-07-22 13:10:19.533245] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.220 [2024-07-22 13:10:19.533281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:48:00.220 [2024-07-22 13:10:19.533309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:00.220 [2024-07-22 13:10:19.537467] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.220 [2024-07-22 13:10:19.537504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.220 [2024-07-22 13:10:19.537534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:00.220 [2024-07-22 13:10:19.542140] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.220 [2024-07-22 13:10:19.542204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.220 [2024-07-22 13:10:19.542220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:00.220 [2024-07-22 13:10:19.546225] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.220 [2024-07-22 13:10:19.546262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.220 [2024-07-22 13:10:19.546292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:00.220 [2024-07-22 13:10:19.549438] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.220 [2024-07-22 13:10:19.549474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.220 [2024-07-22 13:10:19.549503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:00.220 [2024-07-22 13:10:19.552868] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.220 [2024-07-22 13:10:19.552905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.220 [2024-07-22 13:10:19.552933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:00.220 [2024-07-22 13:10:19.556156] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.220 [2024-07-22 13:10:19.556201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.220 [2024-07-22 13:10:19.556229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:00.220 [2024-07-22 13:10:19.559764] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.220 [2024-07-22 13:10:19.559799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2688 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.220 [2024-07-22 13:10:19.559827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:00.220 [2024-07-22 13:10:19.563424] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.220 [2024-07-22 13:10:19.563460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.220 [2024-07-22 13:10:19.563488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:00.220 [2024-07-22 13:10:19.567112] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.220 [2024-07-22 13:10:19.567171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.220 [2024-07-22 13:10:19.567184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:00.220 [2024-07-22 13:10:19.571268] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.220 [2024-07-22 13:10:19.571304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.221 [2024-07-22 13:10:19.571332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:00.221 [2024-07-22 13:10:19.574859] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.221 [2024-07-22 13:10:19.574896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.221 [2024-07-22 13:10:19.574925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:00.221 [2024-07-22 13:10:19.578778] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.221 [2024-07-22 13:10:19.578814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.221 [2024-07-22 13:10:19.578843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:00.221 [2024-07-22 13:10:19.581857] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.221 [2024-07-22 13:10:19.581892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.221 [2024-07-22 13:10:19.581921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:00.221 [2024-07-22 13:10:19.585620] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.221 [2024-07-22 13:10:19.585656] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.221 [2024-07-22 13:10:19.585684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:00.221 [2024-07-22 13:10:19.589371] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.221 [2024-07-22 13:10:19.589406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.221 [2024-07-22 13:10:19.589434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:00.221 [2024-07-22 13:10:19.592816] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.221 [2024-07-22 13:10:19.592852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.221 [2024-07-22 13:10:19.592880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:00.221 [2024-07-22 13:10:19.596012] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.221 [2024-07-22 13:10:19.596048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.221 [2024-07-22 13:10:19.596076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:00.221 [2024-07-22 13:10:19.599870] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.221 [2024-07-22 13:10:19.599906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.221 [2024-07-22 13:10:19.599934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:00.221 [2024-07-22 13:10:19.603869] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.221 [2024-07-22 13:10:19.603905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.221 [2024-07-22 13:10:19.603933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:00.221 [2024-07-22 13:10:19.607046] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.221 [2024-07-22 13:10:19.607080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.221 [2024-07-22 13:10:19.607108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:00.221 [2024-07-22 13:10:19.610273] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.221 [2024-07-22 13:10:19.610307] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.221 [2024-07-22 13:10:19.610336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:00.221 [2024-07-22 13:10:19.613747] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.221 [2024-07-22 13:10:19.613783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.221 [2024-07-22 13:10:19.613811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:00.221 [2024-07-22 13:10:19.617305] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.221 [2024-07-22 13:10:19.617340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.221 [2024-07-22 13:10:19.617369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:00.221 [2024-07-22 13:10:19.620506] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.221 [2024-07-22 13:10:19.620540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.221 [2024-07-22 13:10:19.620568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:00.221 [2024-07-22 13:10:19.624026] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.221 [2024-07-22 13:10:19.624061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.221 [2024-07-22 13:10:19.624090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:00.221 [2024-07-22 13:10:19.627325] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.221 [2024-07-22 13:10:19.627359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.221 [2024-07-22 13:10:19.627387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:00.221 [2024-07-22 13:10:19.630516] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.221 [2024-07-22 13:10:19.630564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.221 [2024-07-22 13:10:19.630592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:00.221 [2024-07-22 13:10:19.633912] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x14ca640) 00:48:00.221 [2024-07-22 13:10:19.633950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.221 [2024-07-22 13:10:19.633979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:00.221 [2024-07-22 13:10:19.637290] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.221 [2024-07-22 13:10:19.637329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.221 [2024-07-22 13:10:19.637357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:00.481 [2024-07-22 13:10:19.641418] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.481 [2024-07-22 13:10:19.641456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.481 [2024-07-22 13:10:19.641485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:00.481 [2024-07-22 13:10:19.644431] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.481 [2024-07-22 13:10:19.644500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.481 [2024-07-22 13:10:19.644529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:00.481 [2024-07-22 13:10:19.648546] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.481 [2024-07-22 13:10:19.648585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.481 [2024-07-22 13:10:19.648614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:00.481 [2024-07-22 13:10:19.651880] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.481 [2024-07-22 13:10:19.651916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.481 [2024-07-22 13:10:19.651944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:00.481 [2024-07-22 13:10:19.655734] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.482 [2024-07-22 13:10:19.655923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.482 [2024-07-22 13:10:19.655958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:00.482 [2024-07-22 13:10:19.659970] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.482 [2024-07-22 13:10:19.660182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.482 [2024-07-22 13:10:19.660201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:00.482 [2024-07-22 13:10:19.663486] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.482 [2024-07-22 13:10:19.663523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.482 [2024-07-22 13:10:19.663552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:00.482 [2024-07-22 13:10:19.667537] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.482 [2024-07-22 13:10:19.667574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.482 [2024-07-22 13:10:19.667602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:00.482 [2024-07-22 13:10:19.670899] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.482 [2024-07-22 13:10:19.670951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.482 [2024-07-22 13:10:19.670980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:00.482 [2024-07-22 13:10:19.674353] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.482 [2024-07-22 13:10:19.674388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.482 [2024-07-22 13:10:19.674417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:00.482 [2024-07-22 13:10:19.677670] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.482 [2024-07-22 13:10:19.677704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.482 [2024-07-22 13:10:19.677732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:00.482 [2024-07-22 13:10:19.681207] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.482 [2024-07-22 13:10:19.681241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.482 [2024-07-22 13:10:19.681269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:48:00.482 [2024-07-22 13:10:19.684388] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.482 [2024-07-22 13:10:19.684422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.482 [2024-07-22 13:10:19.684450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:00.482 [2024-07-22 13:10:19.687975] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.482 [2024-07-22 13:10:19.688011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.482 [2024-07-22 13:10:19.688039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:00.482 [2024-07-22 13:10:19.691596] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.482 [2024-07-22 13:10:19.691631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.482 [2024-07-22 13:10:19.691658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:00.482 [2024-07-22 13:10:19.694965] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.482 [2024-07-22 13:10:19.695016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.482 [2024-07-22 13:10:19.695044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:00.482 [2024-07-22 13:10:19.698350] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.482 [2024-07-22 13:10:19.698385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.482 [2024-07-22 13:10:19.698413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:00.482 [2024-07-22 13:10:19.701885] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.482 [2024-07-22 13:10:19.701921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.482 [2024-07-22 13:10:19.701950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:00.482 [2024-07-22 13:10:19.705072] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.482 [2024-07-22 13:10:19.705108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.482 [2024-07-22 13:10:19.705136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:00.482 [2024-07-22 13:10:19.708993] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.482 [2024-07-22 13:10:19.709029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.482 [2024-07-22 13:10:19.709057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:00.482 [2024-07-22 13:10:19.712601] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.482 [2024-07-22 13:10:19.712636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.482 [2024-07-22 13:10:19.712664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:00.482 [2024-07-22 13:10:19.716326] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.482 [2024-07-22 13:10:19.716361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.482 [2024-07-22 13:10:19.716389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:00.482 [2024-07-22 13:10:19.719867] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.482 [2024-07-22 13:10:19.719903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.482 [2024-07-22 13:10:19.719931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:00.482 [2024-07-22 13:10:19.723106] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.482 [2024-07-22 13:10:19.723182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.482 [2024-07-22 13:10:19.723197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:00.482 [2024-07-22 13:10:19.726287] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.482 [2024-07-22 13:10:19.726322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.482 [2024-07-22 13:10:19.726350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:00.482 [2024-07-22 13:10:19.729894] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.482 [2024-07-22 13:10:19.729929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.482 [2024-07-22 13:10:19.729957] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:00.482 [2024-07-22 13:10:19.733576] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.482 [2024-07-22 13:10:19.733610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.482 [2024-07-22 13:10:19.733638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:00.482 [2024-07-22 13:10:19.737179] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.482 [2024-07-22 13:10:19.737212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.482 [2024-07-22 13:10:19.737240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:00.482 [2024-07-22 13:10:19.740205] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.482 [2024-07-22 13:10:19.740243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.482 [2024-07-22 13:10:19.740271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:00.482 [2024-07-22 13:10:19.743740] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.482 [2024-07-22 13:10:19.743774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.482 [2024-07-22 13:10:19.743801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:00.482 [2024-07-22 13:10:19.747321] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.482 [2024-07-22 13:10:19.747353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.482 [2024-07-22 13:10:19.747381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:00.483 [2024-07-22 13:10:19.750174] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.483 [2024-07-22 13:10:19.750208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.483 [2024-07-22 13:10:19.750235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:00.483 [2024-07-22 13:10:19.753487] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.483 [2024-07-22 13:10:19.753537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.483 [2024-07-22 13:10:19.753565] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:00.483 [2024-07-22 13:10:19.757059] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.483 [2024-07-22 13:10:19.757092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.483 [2024-07-22 13:10:19.757120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:00.483 [2024-07-22 13:10:19.760132] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.483 [2024-07-22 13:10:19.760174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.483 [2024-07-22 13:10:19.760201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:00.483 [2024-07-22 13:10:19.763248] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.483 [2024-07-22 13:10:19.763281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.483 [2024-07-22 13:10:19.763308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:00.483 [2024-07-22 13:10:19.766238] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.483 [2024-07-22 13:10:19.766272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.483 [2024-07-22 13:10:19.766299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:00.483 [2024-07-22 13:10:19.769552] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.483 [2024-07-22 13:10:19.769586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.483 [2024-07-22 13:10:19.769615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:00.483 [2024-07-22 13:10:19.773143] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.483 [2024-07-22 13:10:19.773203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.483 [2024-07-22 13:10:19.773231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:00.483 [2024-07-22 13:10:19.776871] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.483 [2024-07-22 13:10:19.776906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:48:00.483 [2024-07-22 13:10:19.776935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:00.483 [2024-07-22 13:10:19.779972] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.483 [2024-07-22 13:10:19.780005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.483 [2024-07-22 13:10:19.780033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:00.483 [2024-07-22 13:10:19.783432] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.483 [2024-07-22 13:10:19.783467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.483 [2024-07-22 13:10:19.783496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:00.483 [2024-07-22 13:10:19.786663] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.483 [2024-07-22 13:10:19.786699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.483 [2024-07-22 13:10:19.786713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:00.483 [2024-07-22 13:10:19.789875] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.483 [2024-07-22 13:10:19.789907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.483 [2024-07-22 13:10:19.789935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:00.483 [2024-07-22 13:10:19.792623] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.483 [2024-07-22 13:10:19.792657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.483 [2024-07-22 13:10:19.792685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:00.483 [2024-07-22 13:10:19.796493] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.483 [2024-07-22 13:10:19.796531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.483 [2024-07-22 13:10:19.796559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:00.483 [2024-07-22 13:10:19.799705] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.483 [2024-07-22 13:10:19.799769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 
lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.483 [2024-07-22 13:10:19.799804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:00.483 [2024-07-22 13:10:19.804397] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.483 [2024-07-22 13:10:19.804433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.483 [2024-07-22 13:10:19.804461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:00.483 [2024-07-22 13:10:19.808061] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.483 [2024-07-22 13:10:19.808096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.483 [2024-07-22 13:10:19.808124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:00.483 [2024-07-22 13:10:19.812038] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.483 [2024-07-22 13:10:19.812072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.483 [2024-07-22 13:10:19.812101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:00.483 [2024-07-22 13:10:19.814833] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.483 [2024-07-22 13:10:19.814870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.483 [2024-07-22 13:10:19.814884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:00.483 [2024-07-22 13:10:19.818680] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.483 [2024-07-22 13:10:19.818886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.483 [2024-07-22 13:10:19.819065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:00.483 [2024-07-22 13:10:19.822260] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.483 [2024-07-22 13:10:19.822296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.483 [2024-07-22 13:10:19.822325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:00.483 [2024-07-22 13:10:19.825950] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.483 [2024-07-22 13:10:19.825984] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.483 [2024-07-22 13:10:19.826012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:00.483 [2024-07-22 13:10:19.829375] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.483 [2024-07-22 13:10:19.829408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.483 [2024-07-22 13:10:19.829436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:00.483 [2024-07-22 13:10:19.833019] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.483 [2024-07-22 13:10:19.833054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.483 [2024-07-22 13:10:19.833082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:00.483 [2024-07-22 13:10:19.836749] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.483 [2024-07-22 13:10:19.836783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.483 [2024-07-22 13:10:19.836811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:00.483 [2024-07-22 13:10:19.840062] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.483 [2024-07-22 13:10:19.840096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.484 [2024-07-22 13:10:19.840124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:00.484 [2024-07-22 13:10:19.843558] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.484 [2024-07-22 13:10:19.843623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.484 [2024-07-22 13:10:19.843650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:00.484 [2024-07-22 13:10:19.846756] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.484 [2024-07-22 13:10:19.846791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.484 [2024-07-22 13:10:19.846819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:00.484 [2024-07-22 13:10:19.850163] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 
00:48:00.484 [2024-07-22 13:10:19.850195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.484 [2024-07-22 13:10:19.850222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:00.484 [2024-07-22 13:10:19.853223] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.484 [2024-07-22 13:10:19.853255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.484 [2024-07-22 13:10:19.853283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:00.484 [2024-07-22 13:10:19.856783] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.484 [2024-07-22 13:10:19.856816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.484 [2024-07-22 13:10:19.856844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:00.484 [2024-07-22 13:10:19.860709] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.484 [2024-07-22 13:10:19.860891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.484 [2024-07-22 13:10:19.860923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:00.484 [2024-07-22 13:10:19.864716] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.484 [2024-07-22 13:10:19.864902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.484 [2024-07-22 13:10:19.865053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:00.484 [2024-07-22 13:10:19.869132] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.484 [2024-07-22 13:10:19.869342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.484 [2024-07-22 13:10:19.869484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:00.484 [2024-07-22 13:10:19.873242] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.484 [2024-07-22 13:10:19.873410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.484 [2024-07-22 13:10:19.873441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:00.484 [2024-07-22 13:10:19.877167] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.484 [2024-07-22 13:10:19.877200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.484 [2024-07-22 13:10:19.877228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:00.484 [2024-07-22 13:10:19.880930] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.484 [2024-07-22 13:10:19.880965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.484 [2024-07-22 13:10:19.880993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:00.484 [2024-07-22 13:10:19.884217] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.484 [2024-07-22 13:10:19.884259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.484 [2024-07-22 13:10:19.884286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:00.484 [2024-07-22 13:10:19.888088] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.484 [2024-07-22 13:10:19.888123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.484 [2024-07-22 13:10:19.888161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:00.484 [2024-07-22 13:10:19.891091] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.484 [2024-07-22 13:10:19.891124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.484 [2024-07-22 13:10:19.891176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:00.484 [2024-07-22 13:10:19.894255] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.484 [2024-07-22 13:10:19.894288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.484 [2024-07-22 13:10:19.894316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:00.484 [2024-07-22 13:10:19.897580] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.484 [2024-07-22 13:10:19.897616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.484 [2024-07-22 13:10:19.897645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:00.744 [2024-07-22 13:10:19.901270] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.744 [2024-07-22 13:10:19.901309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.744 [2024-07-22 13:10:19.901338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:00.744 [2024-07-22 13:10:19.905247] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.744 [2024-07-22 13:10:19.905286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.744 [2024-07-22 13:10:19.905330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:00.744 [2024-07-22 13:10:19.908069] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.744 [2024-07-22 13:10:19.908106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.744 [2024-07-22 13:10:19.908149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:00.744 [2024-07-22 13:10:19.912052] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.744 [2024-07-22 13:10:19.912089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.744 [2024-07-22 13:10:19.912118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:00.744 [2024-07-22 13:10:19.915734] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.744 [2024-07-22 13:10:19.915768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.744 [2024-07-22 13:10:19.915796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:00.744 [2024-07-22 13:10:19.919200] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.744 [2024-07-22 13:10:19.919257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.744 [2024-07-22 13:10:19.919272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:00.744 [2024-07-22 13:10:19.922762] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.744 [2024-07-22 13:10:19.922801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.744 [2024-07-22 13:10:19.922831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:48:00.744 [2024-07-22 13:10:19.926262] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.744 [2024-07-22 13:10:19.926297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.744 [2024-07-22 13:10:19.926325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:00.744 [2024-07-22 13:10:19.929904] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.744 [2024-07-22 13:10:19.929938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.744 [2024-07-22 13:10:19.929966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:00.744 [2024-07-22 13:10:19.933413] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.744 [2024-07-22 13:10:19.933447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.744 [2024-07-22 13:10:19.933475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:00.744 [2024-07-22 13:10:19.936951] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.744 [2024-07-22 13:10:19.936984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.744 [2024-07-22 13:10:19.937012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:00.744 [2024-07-22 13:10:19.940194] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.744 [2024-07-22 13:10:19.940226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.744 [2024-07-22 13:10:19.940253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:00.744 [2024-07-22 13:10:19.943500] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.744 [2024-07-22 13:10:19.943535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.744 [2024-07-22 13:10:19.943579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:00.744 [2024-07-22 13:10:19.946722] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.744 [2024-07-22 13:10:19.946760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.744 [2024-07-22 13:10:19.946789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:00.744 [2024-07-22 13:10:19.950260] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.744 [2024-07-22 13:10:19.950293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.744 [2024-07-22 13:10:19.950320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:00.744 [2024-07-22 13:10:19.953465] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.744 [2024-07-22 13:10:19.953498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.744 [2024-07-22 13:10:19.953527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:00.744 [2024-07-22 13:10:19.956744] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.744 [2024-07-22 13:10:19.956778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.744 [2024-07-22 13:10:19.956806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:00.745 [2024-07-22 13:10:19.960550] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.745 [2024-07-22 13:10:19.960584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.745 [2024-07-22 13:10:19.960612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:00.745 [2024-07-22 13:10:19.963870] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.745 [2024-07-22 13:10:19.963905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.745 [2024-07-22 13:10:19.963933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:00.745 [2024-07-22 13:10:19.967830] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.745 [2024-07-22 13:10:19.967867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.745 [2024-07-22 13:10:19.967895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:00.745 [2024-07-22 13:10:19.971387] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.745 [2024-07-22 13:10:19.971423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.745 [2024-07-22 13:10:19.971452] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:00.745 [2024-07-22 13:10:19.975828] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.745 [2024-07-22 13:10:19.975865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.745 [2024-07-22 13:10:19.975893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:00.745 [2024-07-22 13:10:19.979736] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.745 [2024-07-22 13:10:19.979772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.745 [2024-07-22 13:10:19.979800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:00.745 [2024-07-22 13:10:19.983579] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.745 [2024-07-22 13:10:19.983614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.745 [2024-07-22 13:10:19.983642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:00.745 [2024-07-22 13:10:19.987357] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.745 [2024-07-22 13:10:19.987396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.745 [2024-07-22 13:10:19.987426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:00.745 [2024-07-22 13:10:19.991305] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.745 [2024-07-22 13:10:19.991343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.745 [2024-07-22 13:10:19.991373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:00.745 [2024-07-22 13:10:19.994857] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.745 [2024-07-22 13:10:19.994895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.745 [2024-07-22 13:10:19.994920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:00.745 [2024-07-22 13:10:19.998439] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.745 [2024-07-22 13:10:19.998474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.745 [2024-07-22 13:10:19.998502] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:00.745 [2024-07-22 13:10:20.002130] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.745 [2024-07-22 13:10:20.002191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.745 [2024-07-22 13:10:20.002220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:00.745 [2024-07-22 13:10:20.005987] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.745 [2024-07-22 13:10:20.006022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.745 [2024-07-22 13:10:20.006051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:00.745 [2024-07-22 13:10:20.009460] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.745 [2024-07-22 13:10:20.009494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.745 [2024-07-22 13:10:20.009521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:00.745 [2024-07-22 13:10:20.012596] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.745 [2024-07-22 13:10:20.012630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.745 [2024-07-22 13:10:20.012658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:00.745 [2024-07-22 13:10:20.016043] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.745 [2024-07-22 13:10:20.016078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.745 [2024-07-22 13:10:20.016106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:00.745 [2024-07-22 13:10:20.019313] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.745 [2024-07-22 13:10:20.019347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.745 [2024-07-22 13:10:20.019375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:00.745 [2024-07-22 13:10:20.022908] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.745 [2024-07-22 13:10:20.022973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:48:00.745 [2024-07-22 13:10:20.023001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:00.745 [2024-07-22 13:10:20.026126] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.745 [2024-07-22 13:10:20.026181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.745 [2024-07-22 13:10:20.026196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:00.745 [2024-07-22 13:10:20.029765] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.745 [2024-07-22 13:10:20.029801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.745 [2024-07-22 13:10:20.029830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:00.745 [2024-07-22 13:10:20.033140] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.745 [2024-07-22 13:10:20.033184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.745 [2024-07-22 13:10:20.033212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:00.745 [2024-07-22 13:10:20.036752] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.745 [2024-07-22 13:10:20.036787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.745 [2024-07-22 13:10:20.036816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:00.745 [2024-07-22 13:10:20.040235] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.745 [2024-07-22 13:10:20.040271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.745 [2024-07-22 13:10:20.040299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:00.745 [2024-07-22 13:10:20.043688] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.745 [2024-07-22 13:10:20.043722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.745 [2024-07-22 13:10:20.043751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:00.745 [2024-07-22 13:10:20.047109] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.745 [2024-07-22 13:10:20.047168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7008 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.745 [2024-07-22 13:10:20.047182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:00.745 [2024-07-22 13:10:20.051109] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.745 [2024-07-22 13:10:20.051164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.745 [2024-07-22 13:10:20.051179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:00.745 [2024-07-22 13:10:20.054819] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.745 [2024-07-22 13:10:20.054873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.745 [2024-07-22 13:10:20.054897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:00.745 [2024-07-22 13:10:20.058456] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.746 [2024-07-22 13:10:20.058496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.746 [2024-07-22 13:10:20.058541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:00.746 [2024-07-22 13:10:20.063434] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.746 [2024-07-22 13:10:20.063472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.746 [2024-07-22 13:10:20.063502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:00.746 [2024-07-22 13:10:20.067785] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.746 [2024-07-22 13:10:20.067823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.746 [2024-07-22 13:10:20.067851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:00.746 [2024-07-22 13:10:20.071509] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.746 [2024-07-22 13:10:20.071545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.746 [2024-07-22 13:10:20.071574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:00.746 [2024-07-22 13:10:20.075487] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.746 [2024-07-22 13:10:20.075524] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.746 [2024-07-22 13:10:20.075553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:00.746 [2024-07-22 13:10:20.078836] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.746 [2024-07-22 13:10:20.078877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.746 [2024-07-22 13:10:20.078892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:00.746 [2024-07-22 13:10:20.082757] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.746 [2024-07-22 13:10:20.082797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.746 [2024-07-22 13:10:20.082826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:00.746 [2024-07-22 13:10:20.086385] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.746 [2024-07-22 13:10:20.086422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.746 [2024-07-22 13:10:20.086451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:00.746 [2024-07-22 13:10:20.090491] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.746 [2024-07-22 13:10:20.090530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.746 [2024-07-22 13:10:20.090574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:00.746 [2024-07-22 13:10:20.094143] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.746 [2024-07-22 13:10:20.094189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.746 [2024-07-22 13:10:20.094218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:00.746 [2024-07-22 13:10:20.097802] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.746 [2024-07-22 13:10:20.097838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.746 [2024-07-22 13:10:20.097866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:00.746 [2024-07-22 13:10:20.101952] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.746 [2024-07-22 13:10:20.102176] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.746 [2024-07-22 13:10:20.102311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:00.746 [2024-07-22 13:10:20.105679] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.746 [2024-07-22 13:10:20.105715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.746 [2024-07-22 13:10:20.105743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:00.746 [2024-07-22 13:10:20.109068] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.746 [2024-07-22 13:10:20.109106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.746 [2024-07-22 13:10:20.109135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:00.746 [2024-07-22 13:10:20.113283] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.746 [2024-07-22 13:10:20.113322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.746 [2024-07-22 13:10:20.113351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:00.746 [2024-07-22 13:10:20.116842] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.746 [2024-07-22 13:10:20.116881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.746 [2024-07-22 13:10:20.116911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:00.746 [2024-07-22 13:10:20.120941] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.746 [2024-07-22 13:10:20.120979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.746 [2024-07-22 13:10:20.121008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:00.746 [2024-07-22 13:10:20.124770] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.746 [2024-07-22 13:10:20.124810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.746 [2024-07-22 13:10:20.124825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:00.746 [2024-07-22 13:10:20.128899] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 
00:48:00.746 [2024-07-22 13:10:20.128934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.746 [2024-07-22 13:10:20.128963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:00.746 [2024-07-22 13:10:20.133083] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.746 [2024-07-22 13:10:20.133119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.746 [2024-07-22 13:10:20.133196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:00.746 [2024-07-22 13:10:20.136815] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.746 [2024-07-22 13:10:20.136852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.746 [2024-07-22 13:10:20.136883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:00.746 [2024-07-22 13:10:20.140788] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.746 [2024-07-22 13:10:20.140825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.746 [2024-07-22 13:10:20.140854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:00.746 [2024-07-22 13:10:20.144639] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.746 [2024-07-22 13:10:20.144675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.746 [2024-07-22 13:10:20.144704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:00.746 [2024-07-22 13:10:20.148202] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.746 [2024-07-22 13:10:20.148236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.746 [2024-07-22 13:10:20.148265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:00.746 [2024-07-22 13:10:20.151496] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.746 [2024-07-22 13:10:20.151546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.746 [2024-07-22 13:10:20.151574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:00.746 [2024-07-22 13:10:20.155173] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.746 [2024-07-22 13:10:20.155213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.746 [2024-07-22 13:10:20.155227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:00.746 [2024-07-22 13:10:20.159266] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.746 [2024-07-22 13:10:20.159303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.746 [2024-07-22 13:10:20.159317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:00.746 [2024-07-22 13:10:20.162906] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:00.747 [2024-07-22 13:10:20.162976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:00.747 [2024-07-22 13:10:20.163018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:01.005 [2024-07-22 13:10:20.166911] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:01.005 [2024-07-22 13:10:20.166969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:01.005 [2024-07-22 13:10:20.166984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:01.005 [2024-07-22 13:10:20.171141] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:01.005 [2024-07-22 13:10:20.171236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:01.005 [2024-07-22 13:10:20.171252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:01.005 [2024-07-22 13:10:20.175604] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:01.005 [2024-07-22 13:10:20.175640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:01.005 [2024-07-22 13:10:20.175669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:01.005 [2024-07-22 13:10:20.179003] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:01.005 [2024-07-22 13:10:20.179054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:01.005 [2024-07-22 13:10:20.179082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:01.005 [2024-07-22 13:10:20.182473] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:01.005 [2024-07-22 13:10:20.182522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:01.005 [2024-07-22 13:10:20.182535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:01.006 [2024-07-22 13:10:20.186464] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:01.006 [2024-07-22 13:10:20.186502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:01.006 [2024-07-22 13:10:20.186516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:01.006 [2024-07-22 13:10:20.190097] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:01.006 [2024-07-22 13:10:20.190133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:01.006 [2024-07-22 13:10:20.190171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:01.006 [2024-07-22 13:10:20.193409] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:01.006 [2024-07-22 13:10:20.193443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:01.006 [2024-07-22 13:10:20.193472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:01.006 [2024-07-22 13:10:20.196866] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:01.006 [2024-07-22 13:10:20.196901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:01.006 [2024-07-22 13:10:20.196929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:01.006 [2024-07-22 13:10:20.200570] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:01.006 [2024-07-22 13:10:20.200606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:01.006 [2024-07-22 13:10:20.200635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:01.006 [2024-07-22 13:10:20.204079] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:01.006 [2024-07-22 13:10:20.204114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:01.006 [2024-07-22 13:10:20.204142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:48:01.006 [2024-07-22 13:10:20.207488] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:01.006 [2024-07-22 13:10:20.207539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:01.006 [2024-07-22 13:10:20.207567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:01.006 [2024-07-22 13:10:20.211260] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:01.006 [2024-07-22 13:10:20.211295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:01.006 [2024-07-22 13:10:20.211309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:01.006 [2024-07-22 13:10:20.214389] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:01.006 [2024-07-22 13:10:20.214423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:01.006 [2024-07-22 13:10:20.214451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:01.006 [2024-07-22 13:10:20.217826] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:01.006 [2024-07-22 13:10:20.217861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:01.006 [2024-07-22 13:10:20.217889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:01.006 [2024-07-22 13:10:20.221322] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:01.006 [2024-07-22 13:10:20.221357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:01.006 [2024-07-22 13:10:20.221385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:01.006 [2024-07-22 13:10:20.224567] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:01.006 [2024-07-22 13:10:20.224602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:01.006 [2024-07-22 13:10:20.224630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:01.006 [2024-07-22 13:10:20.228247] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:01.006 [2024-07-22 13:10:20.228282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:01.006 [2024-07-22 13:10:20.228295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:01.006 [2024-07-22 13:10:20.231897] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:01.006 [2024-07-22 13:10:20.231932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:01.006 [2024-07-22 13:10:20.231960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:01.006 [2024-07-22 13:10:20.234981] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:01.006 [2024-07-22 13:10:20.235017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:01.006 [2024-07-22 13:10:20.235045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:01.006 [2024-07-22 13:10:20.237846] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:01.006 [2024-07-22 13:10:20.237880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:01.006 [2024-07-22 13:10:20.237908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:01.006 [2024-07-22 13:10:20.241118] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:01.006 [2024-07-22 13:10:20.241177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:01.006 [2024-07-22 13:10:20.241190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:01.006 [2024-07-22 13:10:20.244865] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:01.006 [2024-07-22 13:10:20.244900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:01.006 [2024-07-22 13:10:20.244928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:01.006 [2024-07-22 13:10:20.248254] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:01.006 [2024-07-22 13:10:20.248288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:01.006 [2024-07-22 13:10:20.248316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:01.006 [2024-07-22 13:10:20.251515] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:01.006 [2024-07-22 13:10:20.251549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:01.006 [2024-07-22 13:10:20.251577] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:01.006 [2024-07-22 13:10:20.255161] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:01.006 [2024-07-22 13:10:20.255221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:01.006 [2024-07-22 13:10:20.255235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:01.006 [2024-07-22 13:10:20.258808] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:01.006 [2024-07-22 13:10:20.258845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:01.006 [2024-07-22 13:10:20.258859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:01.006 [2024-07-22 13:10:20.262961] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:01.006 [2024-07-22 13:10:20.262998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:01.006 [2024-07-22 13:10:20.263027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:01.006 [2024-07-22 13:10:20.267271] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:01.006 [2024-07-22 13:10:20.267305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:01.006 [2024-07-22 13:10:20.267333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:01.006 [2024-07-22 13:10:20.270174] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:01.006 [2024-07-22 13:10:20.270205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:01.006 [2024-07-22 13:10:20.270217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:01.006 [2024-07-22 13:10:20.273368] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:01.006 [2024-07-22 13:10:20.273403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:01.006 [2024-07-22 13:10:20.273431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:01.006 [2024-07-22 13:10:20.277097] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:01.006 [2024-07-22 13:10:20.277132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:48:01.007 [2024-07-22 13:10:20.277168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:01.007 [2024-07-22 13:10:20.280362] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:01.007 [2024-07-22 13:10:20.280398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:01.007 [2024-07-22 13:10:20.280426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:01.007 [2024-07-22 13:10:20.284012] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:01.007 [2024-07-22 13:10:20.284047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:01.007 [2024-07-22 13:10:20.284074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:01.007 [2024-07-22 13:10:20.287331] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:01.007 [2024-07-22 13:10:20.287366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:01.007 [2024-07-22 13:10:20.287379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:01.007 [2024-07-22 13:10:20.290538] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:01.007 [2024-07-22 13:10:20.290574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:01.007 [2024-07-22 13:10:20.290624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:01.007 [2024-07-22 13:10:20.294161] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14ca640) 00:48:01.007 [2024-07-22 13:10:20.294205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:01.007 [2024-07-22 13:10:20.294233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:01.007 00:48:01.007 Latency(us) 00:48:01.007 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:48:01.007 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:48:01.007 nvme0n1 : 2.00 8532.08 1066.51 0.00 0.00 1872.13 536.20 5064.15 00:48:01.007 =================================================================================================================== 00:48:01.007 Total : 8532.08 1066.51 0.00 0.00 1872.13 536.20 5064.15 00:48:01.007 0 00:48:01.007 13:10:20 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:48:01.007 13:10:20 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:48:01.007 | .driver_specific 00:48:01.007 | .nvme_error 00:48:01.007 | .status_code 00:48:01.007 | .command_transient_transport_error' 00:48:01.007 
13:10:20 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:48:01.007 13:10:20 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:48:01.265 13:10:20 -- host/digest.sh@71 -- # (( 550 > 0 )) 00:48:01.265 13:10:20 -- host/digest.sh@73 -- # killprocess 97043 00:48:01.265 13:10:20 -- common/autotest_common.sh@926 -- # '[' -z 97043 ']' 00:48:01.265 13:10:20 -- common/autotest_common.sh@930 -- # kill -0 97043 00:48:01.265 13:10:20 -- common/autotest_common.sh@931 -- # uname 00:48:01.265 13:10:20 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:48:01.265 13:10:20 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 97043 00:48:01.265 killing process with pid 97043 00:48:01.265 Received shutdown signal, test time was about 2.000000 seconds 00:48:01.265 00:48:01.265 Latency(us) 00:48:01.265 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:48:01.265 =================================================================================================================== 00:48:01.265 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:48:01.265 13:10:20 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:48:01.265 13:10:20 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:48:01.265 13:10:20 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 97043' 00:48:01.265 13:10:20 -- common/autotest_common.sh@945 -- # kill 97043 00:48:01.265 13:10:20 -- common/autotest_common.sh@950 -- # wait 97043 00:48:01.523 13:10:20 -- host/digest.sh@113 -- # run_bperf_err randwrite 4096 128 00:48:01.523 13:10:20 -- host/digest.sh@54 -- # local rw bs qd 00:48:01.523 13:10:20 -- host/digest.sh@56 -- # rw=randwrite 00:48:01.523 13:10:20 -- host/digest.sh@56 -- # bs=4096 00:48:01.523 13:10:20 -- host/digest.sh@56 -- # qd=128 00:48:01.523 13:10:20 -- host/digest.sh@58 -- # bperfpid=97128 00:48:01.523 13:10:20 -- host/digest.sh@60 -- # waitforlisten 97128 /var/tmp/bperf.sock 00:48:01.523 13:10:20 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:48:01.524 13:10:20 -- common/autotest_common.sh@819 -- # '[' -z 97128 ']' 00:48:01.524 13:10:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:48:01.524 13:10:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:48:01.524 13:10:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:48:01.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:48:01.524 13:10:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:48:01.524 13:10:20 -- common/autotest_common.sh@10 -- # set +x 00:48:01.524 [2024-07-22 13:10:20.846112] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
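The randread pass above ends with host/digest.sh verifying that the injected digest corruption actually surfaced as NVMe transient transport errors: it queries bdevperf's I/O statistics over the RPC socket, extracts the per-status-code error counter with jq, asserts the count is non-zero (550 in this run), then kills that bdevperf process and relaunches it for the randwrite pass whose startup continues below. A minimal sketch of that check, reconstructed only from the traced commands (socket path, bdev name, jq filter and bdevperf arguments are taken from the trace; the helper and variable names mirror the script but are otherwise an assumption):

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
bperf_sock=/var/tmp/bperf.sock

get_transient_errcount() {
    # bdev_get_iostat reports per-status-code NVMe error counters because the
    # controller was created after bdev_nvme_set_options --nvme-error-stat.
    "$rpc_py" -s "$bperf_sock" bdev_get_iostat -b "$1" \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
}

# Fail the pass unless at least one injected digest error was counted.
(( $(get_transient_errcount nvme0n1) > 0 ))

# Tear down this bdevperf instance (pid 97043 here), then start a fresh one in
# wait-for-RPC mode (-z) for the randwrite/4096/qd128 pass traced below.
kill "$bperfpid" && wait "$bperfpid"
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r "$bperf_sock" -w randwrite -o 4096 -t 2 -q 128 -z &
bperfpid=$!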
00:48:01.524 [2024-07-22 13:10:20.846442] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97128 ] 00:48:01.782 [2024-07-22 13:10:20.984007] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:01.782 [2024-07-22 13:10:21.044814] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:48:02.714 13:10:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:48:02.714 13:10:21 -- common/autotest_common.sh@852 -- # return 0 00:48:02.714 13:10:21 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:48:02.714 13:10:21 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:48:02.714 13:10:22 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:48:02.714 13:10:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:48:02.714 13:10:22 -- common/autotest_common.sh@10 -- # set +x 00:48:02.714 13:10:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:48:02.714 13:10:22 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:48:02.714 13:10:22 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:48:02.972 nvme0n1 00:48:02.972 13:10:22 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:48:02.972 13:10:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:48:02.972 13:10:22 -- common/autotest_common.sh@10 -- # set +x 00:48:02.972 13:10:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:48:03.230 13:10:22 -- host/digest.sh@69 -- # bperf_py perform_tests 00:48:03.230 13:10:22 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:48:03.230 Running I/O for 2 seconds... 
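With the new bdevperf instance up ("Running I/O for 2 seconds..." above), the trace repeats the preparation sequence used for each error-injection pass: enable per-status-code NVMe error accounting with unlimited bdev retries, clear any previous crc32c injection, attach the TCP controller with data digest enabled, arm crc32c corruption once every 256 operations, and trigger the workload through bdevperf.py. A condensed sketch of that RPC sequence, using only the commands visible in the trace (the split between the bdevperf socket and the target app's default RPC socket is an assumption based on how bperf_rpc and rpc_cmd expand):

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
bperf_sock=/var/tmp/bperf.sock

# bperf_rpc: configure the bdevperf-side NVMe driver to count errors and retry forever.
"$rpc_py" -s "$bperf_sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# rpc_cmd (assumed to use the target app's default socket): make sure no stale
# crc32c injection is active while the controller attaches.
"$rpc_py" accel_error_inject_error -o crc32c -t disable

# Attach the target over TCP with data digest (--ddgst) so every payload is CRC32C-checked.
"$rpc_py" -s "$bperf_sock" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Corrupt every 256th crc32c computation, then run the 2-second randwrite workload.
"$rpc_py" accel_error_inject_error -o crc32c -t corrupt -i 256
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$bperf_sock" perform_tests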
00:48:03.230 [2024-07-22 13:10:22.505507] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190eea00 00:48:03.230 [2024-07-22 13:10:22.506809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:2939 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:03.230 [2024-07-22 13:10:22.506854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:03.230 [2024-07-22 13:10:22.516629] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190fb048 00:48:03.230 [2024-07-22 13:10:22.517948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:7148 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:03.230 [2024-07-22 13:10:22.518000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:03.230 [2024-07-22 13:10:22.527197] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190f1868 00:48:03.230 [2024-07-22 13:10:22.528008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:6508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:03.230 [2024-07-22 13:10:22.528057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:03.230 [2024-07-22 13:10:22.537099] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190f9b30 00:48:03.230 [2024-07-22 13:10:22.537891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:963 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:03.230 [2024-07-22 13:10:22.537941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:48:03.231 [2024-07-22 13:10:22.547255] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190f2510 00:48:03.231 [2024-07-22 13:10:22.547997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:2936 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:03.231 [2024-07-22 13:10:22.548030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:48:03.231 [2024-07-22 13:10:22.557086] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190f8e88 00:48:03.231 [2024-07-22 13:10:22.557879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:14293 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:03.231 [2024-07-22 13:10:22.557928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:48:03.231 [2024-07-22 13:10:22.567137] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190f31b8 00:48:03.231 [2024-07-22 13:10:22.567943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:17486 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:03.231 [2024-07-22 13:10:22.567992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:007c p:0 
m:0 dnr:0 00:48:03.231 [2024-07-22 13:10:22.577242] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190f81e0 00:48:03.231 [2024-07-22 13:10:22.578004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:3009 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:03.231 [2024-07-22 13:10:22.578052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:48:03.231 [2024-07-22 13:10:22.587263] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190f3e60 00:48:03.231 [2024-07-22 13:10:22.588087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:16087 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:03.231 [2024-07-22 13:10:22.588161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:48:03.231 [2024-07-22 13:10:22.597564] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190df988 00:48:03.231 [2024-07-22 13:10:22.598987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:23032 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:03.231 [2024-07-22 13:10:22.599025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:48:03.231 [2024-07-22 13:10:22.607943] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190e73e0 00:48:03.231 [2024-07-22 13:10:22.608646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:9767 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:03.231 [2024-07-22 13:10:22.608692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:48:03.231 [2024-07-22 13:10:22.620200] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190de470 00:48:03.231 [2024-07-22 13:10:22.621415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:5934 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:03.231 [2024-07-22 13:10:22.621461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:48:03.231 [2024-07-22 13:10:22.627341] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190f8618 00:48:03.231 [2024-07-22 13:10:22.627722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:111 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:03.231 [2024-07-22 13:10:22.627755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:03.231 [2024-07-22 13:10:22.639021] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190ebb98 00:48:03.231 [2024-07-22 13:10:22.640050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:20682 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:03.231 [2024-07-22 13:10:22.640095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 
sqhd:0064 p:0 m:0 dnr:0 00:48:03.231 [2024-07-22 13:10:22.645993] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190f1430 00:48:03.231 [2024-07-22 13:10:22.646116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:17640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:03.231 [2024-07-22 13:10:22.646135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:48:03.489 [2024-07-22 13:10:22.657213] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190e9e10 00:48:03.490 [2024-07-22 13:10:22.658326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:18006 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:03.490 [2024-07-22 13:10:22.658379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:48:03.490 [2024-07-22 13:10:22.666668] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190df118 00:48:03.490 [2024-07-22 13:10:22.667805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:5321 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:03.490 [2024-07-22 13:10:22.667854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:48:03.490 [2024-07-22 13:10:22.676096] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190eaab8 00:48:03.490 [2024-07-22 13:10:22.677199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:19950 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:03.490 [2024-07-22 13:10:22.677255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:48:03.490 [2024-07-22 13:10:22.686391] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190f3e60 00:48:03.490 [2024-07-22 13:10:22.686964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:9008 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:03.490 [2024-07-22 13:10:22.686997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:48:03.490 [2024-07-22 13:10:22.698165] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190eb328 00:48:03.490 [2024-07-22 13:10:22.699371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:15284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:03.490 [2024-07-22 13:10:22.699417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:48:03.490 [2024-07-22 13:10:22.705524] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190ed920 00:48:03.490 [2024-07-22 13:10:22.705793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:19482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:03.490 [2024-07-22 13:10:22.705859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:123 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:48:03.490 [2024-07-22 13:10:22.717498] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190f6458 00:48:03.490 [2024-07-22 13:10:22.718474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:12344 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:03.490 [2024-07-22 13:10:22.718532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:48:03.490 [2024-07-22 13:10:22.727121] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190f92c0 00:48:03.490 [2024-07-22 13:10:22.728476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6733 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:03.490 [2024-07-22 13:10:22.728524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:48:03.490 [2024-07-22 13:10:22.737515] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190dfdc0 00:48:03.490 [2024-07-22 13:10:22.738227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:10739 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:03.490 [2024-07-22 13:10:22.738287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:48:03.490 [2024-07-22 13:10:22.750201] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190f35f0 00:48:03.490 [2024-07-22 13:10:22.751493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:18508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:03.490 [2024-07-22 13:10:22.751544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:03.490 [2024-07-22 13:10:22.757492] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190e4140 00:48:03.490 [2024-07-22 13:10:22.757908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:4524 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:03.490 [2024-07-22 13:10:22.757943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:48:03.490 [2024-07-22 13:10:22.767661] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190f4f40 00:48:03.490 [2024-07-22 13:10:22.768141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:6209 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:03.490 [2024-07-22 13:10:22.768184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:48:03.490 [2024-07-22 13:10:22.777902] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190f6890 00:48:03.490 [2024-07-22 13:10:22.779065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:22000 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:03.490 [2024-07-22 13:10:22.779131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:48:03.490 [2024-07-22 13:10:22.788533] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190de038 00:48:03.490 [2024-07-22 13:10:22.789639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:22260 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:03.490 [2024-07-22 13:10:22.789686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:48:03.490 [2024-07-22 13:10:22.799737] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190ef270 00:48:03.490 [2024-07-22 13:10:22.800869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13692 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:03.490 [2024-07-22 13:10:22.800916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:48:03.490 [2024-07-22 13:10:22.808642] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190eb760 00:48:03.490 [2024-07-22 13:10:22.809807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:8632 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:03.490 [2024-07-22 13:10:22.809854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:48:03.490 [2024-07-22 13:10:22.818497] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190fa7d8 00:48:03.490 [2024-07-22 13:10:22.819633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:2228 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:03.490 [2024-07-22 13:10:22.819680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:48:03.490 [2024-07-22 13:10:22.829739] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190f6890 00:48:03.490 [2024-07-22 13:10:22.830878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:3931 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:03.490 [2024-07-22 13:10:22.830912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:48:03.490 [2024-07-22 13:10:22.838321] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190fbcf0 00:48:03.490 [2024-07-22 13:10:22.839639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:23052 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:03.490 [2024-07-22 13:10:22.839686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:48:03.490 [2024-07-22 13:10:22.848334] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190e6b70 00:48:03.490 [2024-07-22 13:10:22.849007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:12309 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:03.490 [2024-07-22 13:10:22.849054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:48:03.490 [2024-07-22 13:10:22.858466] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190f8618 00:48:03.490 [2024-07-22 13:10:22.859226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:15740 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:03.490 [2024-07-22 13:10:22.859275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:48:03.490 [2024-07-22 13:10:22.868064] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190fac10 00:48:03.490 [2024-07-22 13:10:22.869512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:13432 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:03.490 [2024-07-22 13:10:22.869560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:48:03.490 [2024-07-22 13:10:22.877663] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190e99d8 00:48:03.490 [2024-07-22 13:10:22.877872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25327 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:03.490 [2024-07-22 13:10:22.877891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:48:03.490 [2024-07-22 13:10:22.887929] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190e8088 00:48:03.490 [2024-07-22 13:10:22.888632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:3941 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:03.490 [2024-07-22 13:10:22.888712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:48:03.490 [2024-07-22 13:10:22.899205] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190de470 00:48:03.490 [2024-07-22 13:10:22.900506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:15285 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:03.490 [2024-07-22 13:10:22.900555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:48:03.490 [2024-07-22 13:10:22.908513] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190f7100 00:48:03.490 [2024-07-22 13:10:22.909935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:7924 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:03.490 [2024-07-22 13:10:22.910003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:48:03.749 [2024-07-22 13:10:22.919549] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190fd640 00:48:03.749 [2024-07-22 13:10:22.919962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:7915 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:03.749 [2024-07-22 13:10:22.920000] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:48:03.749 [2024-07-22 13:10:22.932273] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190e6738 00:48:03.749 [2024-07-22 13:10:22.933418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:8298 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:03.749 [2024-07-22 13:10:22.933481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:48:03.749 [2024-07-22 13:10:22.939949] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190de8a8 00:48:03.749 [2024-07-22 13:10:22.940102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:12535 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:03.749 [2024-07-22 13:10:22.940121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:48:03.749 [2024-07-22 13:10:22.951785] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190f2948 00:48:03.749 [2024-07-22 13:10:22.952686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:14116 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:03.749 [2024-07-22 13:10:22.952733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:48:03.749 [2024-07-22 13:10:22.960969] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190ebb98 00:48:03.749 [2024-07-22 13:10:22.962217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:18928 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:03.749 [2024-07-22 13:10:22.962291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:48:03.749 [2024-07-22 13:10:22.970804] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190df550 00:48:03.749 [2024-07-22 13:10:22.971367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23185 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:03.749 [2024-07-22 13:10:22.971397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:48:03.749 [2024-07-22 13:10:22.980427] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190e8d30 00:48:03.749 [2024-07-22 13:10:22.981033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:7952 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:03.749 [2024-07-22 13:10:22.981066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:48:03.749 [2024-07-22 13:10:22.989491] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190e8d30 00:48:03.749 [2024-07-22 13:10:22.990425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:4883 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:03.749 [2024-07-22 13:10:22.990472] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:48:03.749 [2024-07-22 13:10:22.999440] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190f6cc8 00:48:03.749 [2024-07-22 13:10:23.000418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:17188 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:03.749 [2024-07-22 13:10:23.000470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:48:03.749 [2024-07-22 13:10:23.010541] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190eaef0 00:48:03.749 [2024-07-22 13:10:23.011380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20986 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:03.749 [2024-07-22 13:10:23.011432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:03.749 [2024-07-22 13:10:23.019921] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190f6890 00:48:03.749 [2024-07-22 13:10:23.020741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:03.749 [2024-07-22 13:10:23.020789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:48:03.749 [2024-07-22 13:10:23.029261] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190dece0 00:48:03.749 [2024-07-22 13:10:23.030011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:13428 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:03.749 [2024-07-22 13:10:23.030058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:48:03.749 [2024-07-22 13:10:23.038741] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190fdeb0 00:48:03.749 [2024-07-22 13:10:23.039553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:24282 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:03.749 [2024-07-22 13:10:23.039616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:48:03.749 [2024-07-22 13:10:23.048066] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190e9e10 00:48:03.749 [2024-07-22 13:10:23.048849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:18807 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:03.749 [2024-07-22 13:10:23.048895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:48:03.749 [2024-07-22 13:10:23.057633] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190e1710 00:48:03.749 [2024-07-22 13:10:23.058373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:24036 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:03.749 [2024-07-22 
13:10:23.058420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:48:03.749 [2024-07-22 13:10:23.067582] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190e4140 00:48:03.749 [2024-07-22 13:10:23.068177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:23218 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:03.749 [2024-07-22 13:10:23.068232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:48:03.749 [2024-07-22 13:10:23.075603] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190f92c0 00:48:03.749 [2024-07-22 13:10:23.075937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:6293 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:03.749 [2024-07-22 13:10:23.075967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:48:03.749 [2024-07-22 13:10:23.086458] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190fd640 00:48:03.749 [2024-07-22 13:10:23.087311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:20611 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:03.749 [2024-07-22 13:10:23.087372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:48:03.749 [2024-07-22 13:10:23.096174] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190e1f80 00:48:03.749 [2024-07-22 13:10:23.097097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:294 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:03.749 [2024-07-22 13:10:23.097166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:03.749 [2024-07-22 13:10:23.104860] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190f2d80 00:48:03.749 [2024-07-22 13:10:23.105382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:7395 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:03.749 [2024-07-22 13:10:23.105414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:48:03.749 [2024-07-22 13:10:23.114309] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190fac10 00:48:03.749 [2024-07-22 13:10:23.114896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5383 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:03.749 [2024-07-22 13:10:23.114944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:48:03.749 [2024-07-22 13:10:23.124352] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190e3d08 00:48:03.749 [2024-07-22 13:10:23.125572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:19794 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:48:03.749 [2024-07-22 13:10:23.125617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:03.749 [2024-07-22 13:10:23.134566] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190f7538 00:48:03.749 [2024-07-22 13:10:23.135731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:21067 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:03.749 [2024-07-22 13:10:23.135776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:03.749 [2024-07-22 13:10:23.144622] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190fd640 00:48:03.749 [2024-07-22 13:10:23.145513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22479 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:03.749 [2024-07-22 13:10:23.145557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:03.749 [2024-07-22 13:10:23.153980] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190e4140 00:48:03.749 [2024-07-22 13:10:23.155825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:15383 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:03.749 [2024-07-22 13:10:23.155871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:03.749 [2024-07-22 13:10:23.162582] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190e4de8 00:48:03.749 [2024-07-22 13:10:23.163924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:21185 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:03.749 [2024-07-22 13:10:23.163972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:48:04.008 [2024-07-22 13:10:23.174682] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190dece0 00:48:04.008 [2024-07-22 13:10:23.175348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:4626 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:04.008 [2024-07-22 13:10:23.175388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:48:04.008 [2024-07-22 13:10:23.187218] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190e3060 00:48:04.008 [2024-07-22 13:10:23.188007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:22509 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:04.008 [2024-07-22 13:10:23.188056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:48:04.008 [2024-07-22 13:10:23.198047] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190ebb98 00:48:04.008 [2024-07-22 13:10:23.198930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:13014 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:48:04.008 [2024-07-22 13:10:23.199006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:48:04.008 [2024-07-22 13:10:23.208900] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190e27f0 00:48:04.008 [2024-07-22 13:10:23.209706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:16992 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:04.008 [2024-07-22 13:10:23.209755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:48:04.008 [2024-07-22 13:10:23.217660] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190fb8b8 00:48:04.008 [2024-07-22 13:10:23.218569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:12643 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:04.008 [2024-07-22 13:10:23.218639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:48:04.008 [2024-07-22 13:10:23.229363] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190f0350 00:48:04.008 [2024-07-22 13:10:23.230417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:24420 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:04.008 [2024-07-22 13:10:23.230462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:48:04.008 [2024-07-22 13:10:23.236644] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190feb58 00:48:04.008 [2024-07-22 13:10:23.236772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:9609 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:04.008 [2024-07-22 13:10:23.236790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:48:04.008 [2024-07-22 13:10:23.247642] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190e0a68 00:48:04.008 [2024-07-22 13:10:23.248243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:17007 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:04.008 [2024-07-22 13:10:23.248276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:48:04.008 [2024-07-22 13:10:23.259160] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190f4f40 00:48:04.008 [2024-07-22 13:10:23.260363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:12123 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:04.008 [2024-07-22 13:10:23.260408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:48:04.008 [2024-07-22 13:10:23.265852] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190ed4e8 00:48:04.008 [2024-07-22 13:10:23.267114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:14349 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:04.008 [2024-07-22 13:10:23.267200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:48:04.008 [2024-07-22 13:10:23.277111] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190e5658 00:48:04.008 [2024-07-22 13:10:23.277928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:10658 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:04.008 [2024-07-22 13:10:23.277974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:48:04.008 [2024-07-22 13:10:23.286833] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190ed920 00:48:04.008 [2024-07-22 13:10:23.287769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:23516 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:04.008 [2024-07-22 13:10:23.287813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:04.008 [2024-07-22 13:10:23.295867] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190f6458 00:48:04.008 [2024-07-22 13:10:23.297033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:16198 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:04.008 [2024-07-22 13:10:23.297080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:04.008 [2024-07-22 13:10:23.305537] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190f5378 00:48:04.008 [2024-07-22 13:10:23.307103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:24943 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:04.008 [2024-07-22 13:10:23.307180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:48:04.008 [2024-07-22 13:10:23.315279] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190eea00 00:48:04.009 [2024-07-22 13:10:23.316292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:21444 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:04.009 [2024-07-22 13:10:23.316340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:04.009 [2024-07-22 13:10:23.326914] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190f3a28 00:48:04.009 [2024-07-22 13:10:23.327983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:9288 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:04.009 [2024-07-22 13:10:23.328031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:48:04.009 [2024-07-22 13:10:23.334388] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190fda78 00:48:04.009 [2024-07-22 13:10:23.334479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:59 nsid:1 lba:13983 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:04.009 [2024-07-22 13:10:23.334498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:48:04.009 [2024-07-22 13:10:23.345947] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190f46d0 00:48:04.009 [2024-07-22 13:10:23.346734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:7874 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:04.009 [2024-07-22 13:10:23.346785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:48:04.009 [2024-07-22 13:10:23.355899] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190f8a50 00:48:04.009 [2024-07-22 13:10:23.356812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:9670 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:04.009 [2024-07-22 13:10:23.356858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:48:04.009 [2024-07-22 13:10:23.365188] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190e6b70 00:48:04.009 [2024-07-22 13:10:23.366291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:24579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:04.009 [2024-07-22 13:10:23.366337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:48:04.009 [2024-07-22 13:10:23.374106] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190eaab8 00:48:04.009 [2024-07-22 13:10:23.375104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:14096 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:04.009 [2024-07-22 13:10:23.375173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:48:04.009 [2024-07-22 13:10:23.385350] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190e7818 00:48:04.009 [2024-07-22 13:10:23.386287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:21740 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:04.009 [2024-07-22 13:10:23.386332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:48:04.009 [2024-07-22 13:10:23.393795] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190e49b0 00:48:04.009 [2024-07-22 13:10:23.394904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:23719 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:04.009 [2024-07-22 13:10:23.394950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:48:04.009 [2024-07-22 13:10:23.404141] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190f6cc8 00:48:04.009 [2024-07-22 13:10:23.405849] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:11274 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:04.009 [2024-07-22 13:10:23.405896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:48:04.009 [2024-07-22 13:10:23.413866] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190e3d08 00:48:04.009 [2024-07-22 13:10:23.414917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:4215 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:04.009 [2024-07-22 13:10:23.414991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:48:04.009 [2024-07-22 13:10:23.423882] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190ddc00 00:48:04.009 [2024-07-22 13:10:23.424397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:12421 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:04.009 [2024-07-22 13:10:23.424431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:48:04.267 [2024-07-22 13:10:23.437810] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190f7da8 00:48:04.267 [2024-07-22 13:10:23.439086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:20126 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:04.267 [2024-07-22 13:10:23.439159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:48:04.267 [2024-07-22 13:10:23.446825] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190de8a8 00:48:04.267 [2024-07-22 13:10:23.448120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:16398 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:04.267 [2024-07-22 13:10:23.448196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:48:04.267 [2024-07-22 13:10:23.456861] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190fe2e8 00:48:04.267 [2024-07-22 13:10:23.458034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:20190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:04.267 [2024-07-22 13:10:23.458083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:48:04.267 [2024-07-22 13:10:23.468295] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190e73e0 00:48:04.267 [2024-07-22 13:10:23.469444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:24533 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:04.267 [2024-07-22 13:10:23.469490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:48:04.267 [2024-07-22 13:10:23.475795] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190e5ec8 00:48:04.268 [2024-07-22 13:10:23.476002] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:8461 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:04.268 [2024-07-22 13:10:23.476021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:48:04.268 [2024-07-22 13:10:23.488003] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190e95a0 00:48:04.268 [2024-07-22 13:10:23.488949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:9328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:04.268 [2024-07-22 13:10:23.488996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:48:04.268 [2024-07-22 13:10:23.497536] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190df988 00:48:04.268 [2024-07-22 13:10:23.498965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:10370 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:04.268 [2024-07-22 13:10:23.499029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:48:04.268 [2024-07-22 13:10:23.507502] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190e5ec8 00:48:04.268 [2024-07-22 13:10:23.508917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:23764 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:04.268 [2024-07-22 13:10:23.508965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:48:04.268 [2024-07-22 13:10:23.520600] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190f6cc8 00:48:04.268 [2024-07-22 13:10:23.521742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:6791 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:04.268 [2024-07-22 13:10:23.521793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:48:04.268 [2024-07-22 13:10:23.528608] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190f7970 00:48:04.268 [2024-07-22 13:10:23.528767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:10301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:04.268 [2024-07-22 13:10:23.528789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:48:04.268 [2024-07-22 13:10:23.540841] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190e73e0 00:48:04.268 [2024-07-22 13:10:23.541708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:6092 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:04.268 [2024-07-22 13:10:23.541787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:48:04.268 [2024-07-22 13:10:23.550244] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190dece0 00:48:04.268 [2024-07-22 
13:10:23.551560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:3880 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:04.268 [2024-07-22 13:10:23.551608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:48:04.268 [2024-07-22 13:10:23.560337] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190e7818 00:48:04.268 [2024-07-22 13:10:23.560898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:3281 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:04.268 [2024-07-22 13:10:23.560946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:48:04.268 [2024-07-22 13:10:23.570350] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190f8a50 00:48:04.268 [2024-07-22 13:10:23.570992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:13111 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:04.268 [2024-07-22 13:10:23.571055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:48:04.268 [2024-07-22 13:10:23.579864] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190f8a50 00:48:04.268 [2024-07-22 13:10:23.581222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:11610 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:04.268 [2024-07-22 13:10:23.581283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:48:04.268 [2024-07-22 13:10:23.589509] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190f6890 00:48:04.268 [2024-07-22 13:10:23.589616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:23081 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:04.268 [2024-07-22 13:10:23.589635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:48:04.268 [2024-07-22 13:10:23.601286] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190f3a28 00:48:04.268 [2024-07-22 13:10:23.603139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:10973 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:04.268 [2024-07-22 13:10:23.603241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:04.268 [2024-07-22 13:10:23.613779] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190fc560 00:48:04.268 [2024-07-22 13:10:23.614966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:21450 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:04.268 [2024-07-22 13:10:23.615014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:48:04.268 [2024-07-22 13:10:23.622134] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190e99d8 
00:48:04.268 [2024-07-22 13:10:23.622325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:3574 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:04.268 [2024-07-22 13:10:23.622346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:48:04.268 [2024-07-22 13:10:23.635127] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190ed920 00:48:04.268 [2024-07-22 13:10:23.636029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:21216 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:04.268 [2024-07-22 13:10:23.636075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:48:04.268 [2024-07-22 13:10:23.644934] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190ea680 00:48:04.268 [2024-07-22 13:10:23.646189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:19152 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:04.268 [2024-07-22 13:10:23.646246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:48:04.268 [2024-07-22 13:10:23.654984] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190eee38 00:48:04.268 [2024-07-22 13:10:23.656377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:14617 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:04.268 [2024-07-22 13:10:23.656425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:48:04.268 [2024-07-22 13:10:23.667043] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190fef90 00:48:04.268 [2024-07-22 13:10:23.668877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:10396 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:04.268 [2024-07-22 13:10:23.668930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:48:04.268 [2024-07-22 13:10:23.678179] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190f1430 00:48:04.268 [2024-07-22 13:10:23.679106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:25090 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:04.268 [2024-07-22 13:10:23.679194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:48:04.527 [2024-07-22 13:10:23.689092] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190eea00 00:48:04.527 [2024-07-22 13:10:23.690047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:11260 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:04.527 [2024-07-22 13:10:23.690100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:48:04.527 [2024-07-22 13:10:23.698350] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with 
pdu=0x2000190eff18 00:48:04.527 [2024-07-22 13:10:23.699313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:8434 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:04.527 [2024-07-22 13:10:23.699365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:48:04.527 [2024-07-22 13:10:23.708541] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190eff18 00:48:04.527 [2024-07-22 13:10:23.709700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:5239 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:04.527 [2024-07-22 13:10:23.709749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:48:04.527 [2024-07-22 13:10:23.718857] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190df550 00:48:04.527 [2024-07-22 13:10:23.719449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:13816 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:04.527 [2024-07-22 13:10:23.719499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:48:04.527 [2024-07-22 13:10:23.731653] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190f3e60 00:48:04.527 [2024-07-22 13:10:23.732940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:230 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:04.527 [2024-07-22 13:10:23.732988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:48:04.527 [2024-07-22 13:10:23.739463] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190f0788 00:48:04.527 [2024-07-22 13:10:23.739897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:15827 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:04.527 [2024-07-22 13:10:23.739941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:48:04.527 [2024-07-22 13:10:23.751177] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190f5378 00:48:04.527 [2024-07-22 13:10:23.752086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:16100 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:04.527 [2024-07-22 13:10:23.752134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:48:04.527 [2024-07-22 13:10:23.761346] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190ebb98 00:48:04.527 [2024-07-22 13:10:23.762350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:13530 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:04.527 [2024-07-22 13:10:23.762399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:48:04.527 [2024-07-22 13:10:23.771474] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x5d1b50) with pdu=0x2000190e1b48 00:48:04.527 [2024-07-22 13:10:23.772785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:10559 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:04.527 [2024-07-22 13:10:23.772834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:48:04.527 [2024-07-22 13:10:23.784861] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190ed920 00:48:04.527 [2024-07-22 13:10:23.785825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:13551 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:04.527 [2024-07-22 13:10:23.785877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:48:04.527 [2024-07-22 13:10:23.794949] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190feb58 00:48:04.527 [2024-07-22 13:10:23.796372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:13676 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:04.527 [2024-07-22 13:10:23.796422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:48:04.527 [2024-07-22 13:10:23.805731] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190e6b70 00:48:04.527 [2024-07-22 13:10:23.806440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:20078 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:04.527 [2024-07-22 13:10:23.806505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:48:04.527 [2024-07-22 13:10:23.816335] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190fac10 00:48:04.527 [2024-07-22 13:10:23.817102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:1006 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:04.527 [2024-07-22 13:10:23.817174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:48:04.527 [2024-07-22 13:10:23.825366] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190f4b08 00:48:04.527 [2024-07-22 13:10:23.825719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:19490 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:04.527 [2024-07-22 13:10:23.825749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:48:04.527 [2024-07-22 13:10:23.837639] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190e6b70 00:48:04.527 [2024-07-22 13:10:23.838688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:4045 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:04.527 [2024-07-22 13:10:23.838736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:48:04.527 [2024-07-22 13:10:23.846762] tcp.c:2034:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190e38d0 00:48:04.527 [2024-07-22 13:10:23.847885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:11821 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:04.527 [2024-07-22 13:10:23.847931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:48:04.527 [2024-07-22 13:10:23.856731] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190e9e10 00:48:04.527 [2024-07-22 13:10:23.858403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:13571 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:04.527 [2024-07-22 13:10:23.858450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:48:04.527 [2024-07-22 13:10:23.866666] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190e0630 00:48:04.527 [2024-07-22 13:10:23.867705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10238 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:04.527 [2024-07-22 13:10:23.867751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:04.527 [2024-07-22 13:10:23.876521] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190ee5c8 00:48:04.527 [2024-07-22 13:10:23.877035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:13808 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:04.527 [2024-07-22 13:10:23.877069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:48:04.527 [2024-07-22 13:10:23.888638] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190ea680 00:48:04.527 [2024-07-22 13:10:23.889814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:3390 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:04.527 [2024-07-22 13:10:23.889859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:48:04.527 [2024-07-22 13:10:23.896107] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190fe720 00:48:04.527 [2024-07-22 13:10:23.896332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:18372 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:04.528 [2024-07-22 13:10:23.896351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:48:04.528 [2024-07-22 13:10:23.908170] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190fdeb0 00:48:04.528 [2024-07-22 13:10:23.909108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:14785 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:04.528 [2024-07-22 13:10:23.909159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:48:04.528 [2024-07-22 13:10:23.916947] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190df988 00:48:04.528 [2024-07-22 13:10:23.917990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:10740 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:04.528 [2024-07-22 13:10:23.918037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:48:04.528 [2024-07-22 13:10:23.927566] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190ef270 00:48:04.528 [2024-07-22 13:10:23.928064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:11383 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:04.528 [2024-07-22 13:10:23.928103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:48:04.528 [2024-07-22 13:10:23.939208] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190ff3c8 00:48:04.528 [2024-07-22 13:10:23.940050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:164 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:04.528 [2024-07-22 13:10:23.940102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:48:04.786 [2024-07-22 13:10:23.949795] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190ed0b0 00:48:04.786 [2024-07-22 13:10:23.950400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:7516 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:04.786 [2024-07-22 13:10:23.950470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:48:04.786 [2024-07-22 13:10:23.958584] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190f31b8 00:48:04.786 [2024-07-22 13:10:23.958894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:4641 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:04.786 [2024-07-22 13:10:23.958930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:48:04.786 [2024-07-22 13:10:23.969715] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190f31b8 00:48:04.786 [2024-07-22 13:10:23.970530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:8802 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:04.786 [2024-07-22 13:10:23.970578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:48:04.786 [2024-07-22 13:10:23.978288] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190fcdd0 00:48:04.786 [2024-07-22 13:10:23.979215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:18246 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:04.786 [2024-07-22 13:10:23.979264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:48:04.786 [2024-07-22 
13:10:23.988110] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190e8d30 00:48:04.786 [2024-07-22 13:10:23.988936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:13165 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:04.786 [2024-07-22 13:10:23.988987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:48:04.786 [2024-07-22 13:10:23.999284] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190e8d30 00:48:04.786 [2024-07-22 13:10:24.000071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:1780 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:04.786 [2024-07-22 13:10:24.000117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:48:04.786 [2024-07-22 13:10:24.008012] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190f7970 00:48:04.786 [2024-07-22 13:10:24.008895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23861 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:04.786 [2024-07-22 13:10:24.008959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:48:04.786 [2024-07-22 13:10:24.018456] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190ea680 00:48:04.786 [2024-07-22 13:10:24.019334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19582 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:04.786 [2024-07-22 13:10:24.019384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:48:04.786 [2024-07-22 13:10:24.028108] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190dfdc0 00:48:04.787 [2024-07-22 13:10:24.029276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:6979 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:04.787 [2024-07-22 13:10:24.029324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:48:04.787 [2024-07-22 13:10:24.039003] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190ea248 00:48:04.787 [2024-07-22 13:10:24.040351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:8473 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:04.787 [2024-07-22 13:10:24.040406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:48:04.787 [2024-07-22 13:10:24.049187] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190e23b8 00:48:04.787 [2024-07-22 13:10:24.050118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:8408 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:04.787 [2024-07-22 13:10:24.050177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 
00:48:04.787 [2024-07-22 13:10:24.061092] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190f7538 00:48:04.787 [2024-07-22 13:10:24.062021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:20559 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:04.787 [2024-07-22 13:10:24.062068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:48:04.787 [2024-07-22 13:10:24.069999] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190fe720 00:48:04.787 [2024-07-22 13:10:24.071081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:16608 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:04.787 [2024-07-22 13:10:24.071128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:48:04.787 [2024-07-22 13:10:24.079843] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190f92c0 00:48:04.787 [2024-07-22 13:10:24.081493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:15211 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:04.787 [2024-07-22 13:10:24.081541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:48:04.787 [2024-07-22 13:10:24.089751] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190ed0b0 00:48:04.787 [2024-07-22 13:10:24.090696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10466 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:04.787 [2024-07-22 13:10:24.090744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:48:04.787 [2024-07-22 13:10:24.099579] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190e3498 00:48:04.787 [2024-07-22 13:10:24.100593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:2028 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:04.787 [2024-07-22 13:10:24.100638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:48:04.787 [2024-07-22 13:10:24.110269] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190f7538 00:48:04.787 [2024-07-22 13:10:24.111232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:19185 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:04.787 [2024-07-22 13:10:24.111277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:04.787 [2024-07-22 13:10:24.120602] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190f2948 00:48:04.787 [2024-07-22 13:10:24.121402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:1269 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:04.787 [2024-07-22 13:10:24.121448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:007f 
p:0 m:0 dnr:0 00:48:04.787 [2024-07-22 13:10:24.130334] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190e27f0 00:48:04.787 [2024-07-22 13:10:24.131855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:11931 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:04.787 [2024-07-22 13:10:24.131903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:48:04.787 [2024-07-22 13:10:24.139885] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190fb480 00:48:04.787 [2024-07-22 13:10:24.140760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:22457 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:04.787 [2024-07-22 13:10:24.140823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:48:04.787 [2024-07-22 13:10:24.150944] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190efae0 00:48:04.787 [2024-07-22 13:10:24.151978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:2264 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:04.787 [2024-07-22 13:10:24.152023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:04.787 [2024-07-22 13:10:24.158324] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190e95a0 00:48:04.787 [2024-07-22 13:10:24.158406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:13043 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:04.787 [2024-07-22 13:10:24.158425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:48:04.787 [2024-07-22 13:10:24.170029] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190e23b8 00:48:04.787 [2024-07-22 13:10:24.170648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:9978 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:04.787 [2024-07-22 13:10:24.170684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:48:04.787 [2024-07-22 13:10:24.179987] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190fd640 00:48:04.787 [2024-07-22 13:10:24.180582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:14667 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:04.787 [2024-07-22 13:10:24.180618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:48:04.787 [2024-07-22 13:10:24.191396] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190ddc00 00:48:04.787 [2024-07-22 13:10:24.193081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:21666 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:04.787 [2024-07-22 13:10:24.193134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 
cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:48:04.787 [2024-07-22 13:10:24.204044] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190ed4e8 00:48:04.787 [2024-07-22 13:10:24.206002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:17104 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:04.787 [2024-07-22 13:10:24.206079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:48:05.046 [2024-07-22 13:10:24.215571] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190e1f80 00:48:05.046 [2024-07-22 13:10:24.217377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:10407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:05.046 [2024-07-22 13:10:24.217431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:48:05.046 [2024-07-22 13:10:24.226151] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190eaef0 00:48:05.046 [2024-07-22 13:10:24.227887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:21860 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:05.046 [2024-07-22 13:10:24.227937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:48:05.046 [2024-07-22 13:10:24.236362] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190f8e88 00:48:05.046 [2024-07-22 13:10:24.237861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:16655 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:05.046 [2024-07-22 13:10:24.237909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:48:05.046 [2024-07-22 13:10:24.245501] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190f1430 00:48:05.046 [2024-07-22 13:10:24.246874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:11305 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:05.046 [2024-07-22 13:10:24.246927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:48:05.046 [2024-07-22 13:10:24.255843] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190f8a50 00:48:05.046 [2024-07-22 13:10:24.256473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19962 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:05.046 [2024-07-22 13:10:24.256511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:48:05.046 [2024-07-22 13:10:24.267441] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190e1b48 00:48:05.046 [2024-07-22 13:10:24.268530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:25078 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:05.046 [2024-07-22 13:10:24.268577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:98 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:48:05.046 [2024-07-22 13:10:24.276209] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190fac10 00:48:05.046 [2024-07-22 13:10:24.277429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:23440 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:05.046 [2024-07-22 13:10:24.277477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:48:05.046 [2024-07-22 13:10:24.286319] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190f4f40 00:48:05.046 [2024-07-22 13:10:24.286964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:1735 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:05.046 [2024-07-22 13:10:24.286999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:48:05.046 [2024-07-22 13:10:24.299829] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190eaab8 00:48:05.046 [2024-07-22 13:10:24.301085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:19192 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:05.046 [2024-07-22 13:10:24.301163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:48:05.046 [2024-07-22 13:10:24.307315] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190e27f0 00:48:05.046 [2024-07-22 13:10:24.307686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21544 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:05.046 [2024-07-22 13:10:24.307722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:48:05.046 [2024-07-22 13:10:24.319291] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190f0bc0 00:48:05.046 [2024-07-22 13:10:24.320334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:05.046 [2024-07-22 13:10:24.320381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:48:05.046 [2024-07-22 13:10:24.328303] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190f20d8 00:48:05.046 [2024-07-22 13:10:24.329476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:1192 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:05.046 [2024-07-22 13:10:24.329524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:05.046 [2024-07-22 13:10:24.337236] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190ef6a8 00:48:05.046 [2024-07-22 13:10:24.338174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:4956 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:05.046 [2024-07-22 13:10:24.338231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:48:05.046 [2024-07-22 13:10:24.347164] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190ee190 00:48:05.046 [2024-07-22 13:10:24.348333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:9803 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:05.046 [2024-07-22 13:10:24.348382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:48:05.046 [2024-07-22 13:10:24.358383] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190ebfd0 00:48:05.046 [2024-07-22 13:10:24.359118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:10745 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:05.046 [2024-07-22 13:10:24.359187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:48:05.046 [2024-07-22 13:10:24.369786] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190fc998 00:48:05.046 [2024-07-22 13:10:24.371037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:9206 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:05.046 [2024-07-22 13:10:24.371083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:48:05.046 [2024-07-22 13:10:24.377248] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190fa7d8 00:48:05.046 [2024-07-22 13:10:24.377460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:24715 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:05.046 [2024-07-22 13:10:24.377478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:48:05.046 [2024-07-22 13:10:24.389391] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190ea680 00:48:05.046 [2024-07-22 13:10:24.390352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:25059 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:05.046 [2024-07-22 13:10:24.390398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:48:05.046 [2024-07-22 13:10:24.398769] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190fa3a0 00:48:05.046 [2024-07-22 13:10:24.400071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:6191 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:05.046 [2024-07-22 13:10:24.400117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:48:05.046 [2024-07-22 13:10:24.408901] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190fef90 00:48:05.046 [2024-07-22 13:10:24.409590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:24845 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:05.046 [2024-07-22 13:10:24.409649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:48:05.046 [2024-07-22 13:10:24.421199] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190e49b0 00:48:05.046 [2024-07-22 13:10:24.422471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:8093 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:05.046 [2024-07-22 13:10:24.422516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:48:05.046 [2024-07-22 13:10:24.428603] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190f9f68 00:48:05.046 [2024-07-22 13:10:24.428985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:7588 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:05.046 [2024-07-22 13:10:24.429017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:48:05.046 [2024-07-22 13:10:24.440191] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190f0788 00:48:05.046 [2024-07-22 13:10:24.440958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:10519 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:05.046 [2024-07-22 13:10:24.441022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:48:05.046 [2024-07-22 13:10:24.450290] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190dfdc0 00:48:05.046 [2024-07-22 13:10:24.451555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:121 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:05.046 [2024-07-22 13:10:24.451606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:48:05.046 [2024-07-22 13:10:24.460421] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190f6cc8 00:48:05.046 [2024-07-22 13:10:24.460858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:16218 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:05.046 [2024-07-22 13:10:24.460896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:48:05.356 [2024-07-22 13:10:24.471170] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190f5be8 00:48:05.356 [2024-07-22 13:10:24.471643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:10096 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:05.356 [2024-07-22 13:10:24.471684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:48:05.356 [2024-07-22 13:10:24.480600] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190e99d8 00:48:05.356 [2024-07-22 13:10:24.481869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:8592 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:05.356 [2024-07-22 13:10:24.481917] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:05.356 [2024-07-22 13:10:24.491001] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1b50) with pdu=0x2000190ec840 00:48:05.356 [2024-07-22 13:10:24.491468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:245 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:48:05.356 [2024-07-22 13:10:24.491504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:48:05.356 00:48:05.356 Latency(us) 00:48:05.356 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:48:05.356 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:48:05.356 nvme0n1 : 2.01 25001.80 97.66 0.00 0.00 5113.22 1809.69 13345.51 00:48:05.356 =================================================================================================================== 00:48:05.356 Total : 25001.80 97.66 0.00 0.00 5113.22 1809.69 13345.51 00:48:05.356 0 00:48:05.356 13:10:24 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:48:05.356 13:10:24 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:48:05.356 13:10:24 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:48:05.356 | .driver_specific 00:48:05.356 | .nvme_error 00:48:05.356 | .status_code 00:48:05.356 | .command_transient_transport_error' 00:48:05.356 13:10:24 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:48:05.356 13:10:24 -- host/digest.sh@71 -- # (( 196 > 0 )) 00:48:05.356 13:10:24 -- host/digest.sh@73 -- # killprocess 97128 00:48:05.356 13:10:24 -- common/autotest_common.sh@926 -- # '[' -z 97128 ']' 00:48:05.356 13:10:24 -- common/autotest_common.sh@930 -- # kill -0 97128 00:48:05.356 13:10:24 -- common/autotest_common.sh@931 -- # uname 00:48:05.356 13:10:24 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:48:05.356 13:10:24 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 97128 00:48:05.356 13:10:24 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:48:05.356 13:10:24 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:48:05.356 killing process with pid 97128 00:48:05.356 13:10:24 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 97128' 00:48:05.356 Received shutdown signal, test time was about 2.000000 seconds 00:48:05.356 00:48:05.356 Latency(us) 00:48:05.356 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:48:05.356 =================================================================================================================== 00:48:05.356 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:48:05.356 13:10:24 -- common/autotest_common.sh@945 -- # kill 97128 00:48:05.356 13:10:24 -- common/autotest_common.sh@950 -- # wait 97128 00:48:05.614 13:10:24 -- host/digest.sh@114 -- # run_bperf_err randwrite 131072 16 00:48:05.614 13:10:24 -- host/digest.sh@54 -- # local rw bs qd 00:48:05.614 13:10:24 -- host/digest.sh@56 -- # rw=randwrite 00:48:05.614 13:10:24 -- host/digest.sh@56 -- # bs=131072 00:48:05.615 13:10:24 -- host/digest.sh@56 -- # qd=16 00:48:05.615 13:10:24 -- host/digest.sh@58 -- # bperfpid=97217 00:48:05.615 13:10:24 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 
00:48:05.615 13:10:24 -- host/digest.sh@60 -- # waitforlisten 97217 /var/tmp/bperf.sock 00:48:05.615 13:10:24 -- common/autotest_common.sh@819 -- # '[' -z 97217 ']' 00:48:05.615 13:10:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:48:05.615 13:10:24 -- common/autotest_common.sh@824 -- # local max_retries=100 00:48:05.615 13:10:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:48:05.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:48:05.615 13:10:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:48:05.615 13:10:24 -- common/autotest_common.sh@10 -- # set +x 00:48:05.615 I/O size of 131072 is greater than zero copy threshold (65536). 00:48:05.615 Zero copy mechanism will not be used. 00:48:05.615 [2024-07-22 13:10:25.020262] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:48:05.615 [2024-07-22 13:10:25.020363] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97217 ] 00:48:05.874 [2024-07-22 13:10:25.153988] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:05.874 [2024-07-22 13:10:25.232453] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:48:06.806 13:10:25 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:48:06.806 13:10:25 -- common/autotest_common.sh@852 -- # return 0 00:48:06.806 13:10:25 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:48:06.806 13:10:25 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:48:06.806 13:10:26 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:48:06.806 13:10:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:48:06.806 13:10:26 -- common/autotest_common.sh@10 -- # set +x 00:48:06.806 13:10:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:48:06.806 13:10:26 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:48:06.806 13:10:26 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:48:07.064 nvme0n1 00:48:07.064 13:10:26 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:48:07.064 13:10:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:48:07.064 13:10:26 -- common/autotest_common.sh@10 -- # set +x 00:48:07.064 13:10:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:48:07.064 13:10:26 -- host/digest.sh@69 -- # bperf_py perform_tests 00:48:07.064 13:10:26 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:48:07.323 I/O size of 131072 is greater than zero copy threshold (65536). 00:48:07.323 Zero copy mechanism will not be used. 00:48:07.323 Running I/O for 2 seconds... 
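Note on the trace above: this is the setup half of digest.sh's run_bperf_err randwrite 131072 16 case. bdevperf has been started in wait-for-RPC mode (-z), NVMe error counters are enabled with bdev_nvme_set_options --nvme-error-stat, crc32c corruption is injected on every 32nd operation via accel_error_inject_error, the controller is attached over TCP with data digest enabled (--ddgst), and perform_tests then drives I/O for 2 seconds. The sketch below only collects the RPCs that are visible in this trace in one place for reference; it is not the script itself. It assumes bdevperf is already listening on /var/tmp/bperf.sock, that the target set up earlier in this run still exports nqn.2016-06.io.spdk:cnode1 at 10.0.0.2:4420, and that accel_error_inject_error is issued through the suite's rpc_cmd helper (its socket is not shown in this log).

  # Paths as used in this workspace
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  bperf_py=/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py
  sock=/var/tmp/bperf.sock

  # Keep per-controller NVMe error counters and retry failed commands indefinitely
  $rpc -s $sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # Corrupt every 32nd crc32c operation (issued via rpc_cmd in the trace above;
  # the socket it talks to is an assumption here)
  $rpc accel_error_inject_error -o crc32c -t corrupt -i 32

  # Attach over TCP with data digest enabled so the corrupted digests are detected
  $rpc -s $sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # Run the configured randwrite workload for 2 seconds
  $bperf_py -s $sock perform_tests

  # Read back the transient transport error count, as in the (( 196 > 0 )) check earlier
  $rpc -s $sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'

With the digest deliberately corrupted, each affected WRITE completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22), which is the pattern repeated in the entries that follow; the test only requires that this counter end up non-zero.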
00:48:07.323 [2024-07-22 13:10:26.570254] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:07.323 [2024-07-22 13:10:26.570576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:07.323 [2024-07-22 13:10:26.570648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:07.323 [2024-07-22 13:10:26.574825] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:07.323 [2024-07-22 13:10:26.574997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:07.323 [2024-07-22 13:10:26.575019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:07.323 [2024-07-22 13:10:26.579238] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:07.323 [2024-07-22 13:10:26.579369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:07.323 [2024-07-22 13:10:26.579391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:07.323 [2024-07-22 13:10:26.583430] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:07.323 [2024-07-22 13:10:26.583563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:07.323 [2024-07-22 13:10:26.583583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:07.323 [2024-07-22 13:10:26.587579] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:07.323 [2024-07-22 13:10:26.587699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:07.323 [2024-07-22 13:10:26.587722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:07.323 [2024-07-22 13:10:26.591752] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:07.323 [2024-07-22 13:10:26.591873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:07.323 [2024-07-22 13:10:26.591894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:07.323 [2024-07-22 13:10:26.596090] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:07.323 [2024-07-22 13:10:26.596245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:07.323 [2024-07-22 13:10:26.596266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:07.323 [2024-07-22 13:10:26.600387] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:07.323 [2024-07-22 13:10:26.600615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:07.323 [2024-07-22 13:10:26.600636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:07.323 [2024-07-22 13:10:26.604493] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:07.324 [2024-07-22 13:10:26.604699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:07.324 [2024-07-22 13:10:26.604720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:07.324 [2024-07-22 13:10:26.608675] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:07.324 [2024-07-22 13:10:26.608817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:07.324 [2024-07-22 13:10:26.608838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:07.324 [2024-07-22 13:10:26.612740] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:07.324 [2024-07-22 13:10:26.612862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:07.324 [2024-07-22 13:10:26.612883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:07.324 [2024-07-22 13:10:26.616998] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:07.324 [2024-07-22 13:10:26.617141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:07.324 [2024-07-22 13:10:26.617162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:07.324 [2024-07-22 13:10:26.621185] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:07.324 [2024-07-22 13:10:26.621312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:07.324 [2024-07-22 13:10:26.621333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:07.324 [2024-07-22 13:10:26.625401] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:07.324 [2024-07-22 13:10:26.625544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:07.324 [2024-07-22 13:10:26.625563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:07.324 [2024-07-22 13:10:26.629583] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:07.324 [2024-07-22 13:10:26.629729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:07.324 [2024-07-22 13:10:26.629750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:07.324 [2024-07-22 13:10:26.633804] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:07.324 [2024-07-22 13:10:26.634024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:07.324 [2024-07-22 13:10:26.634044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:07.324 [2024-07-22 13:10:26.638013] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:07.324 [2024-07-22 13:10:26.638202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:07.324 [2024-07-22 13:10:26.638235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:07.324 [2024-07-22 13:10:26.642210] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:07.324 [2024-07-22 13:10:26.642371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:07.324 [2024-07-22 13:10:26.642392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:07.324 [2024-07-22 13:10:26.646319] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:07.324 [2024-07-22 13:10:26.646442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:07.324 [2024-07-22 13:10:26.646463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:07.324 [2024-07-22 13:10:26.650463] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:07.324 [2024-07-22 13:10:26.650594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:07.324 [2024-07-22 13:10:26.650647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:07.324 [2024-07-22 13:10:26.654569] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:07.324 [2024-07-22 13:10:26.654712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:07.324 [2024-07-22 13:10:26.654734] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:07.324 [2024-07-22 13:10:26.658678] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:07.324 [2024-07-22 13:10:26.658819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:07.324 [2024-07-22 13:10:26.658841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:07.324 [2024-07-22 13:10:26.662805] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:07.324 [2024-07-22 13:10:26.662940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:07.324 [2024-07-22 13:10:26.662961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:07.324 [2024-07-22 13:10:26.667142] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:07.324 [2024-07-22 13:10:26.667379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:07.324 [2024-07-22 13:10:26.667400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:07.324 [2024-07-22 13:10:26.671286] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:07.324 [2024-07-22 13:10:26.671537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:07.324 [2024-07-22 13:10:26.671617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:07.324 [2024-07-22 13:10:26.675390] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:07.324 [2024-07-22 13:10:26.675537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:07.324 [2024-07-22 13:10:26.675557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:07.324 [2024-07-22 13:10:26.679461] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:07.324 [2024-07-22 13:10:26.679592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:07.324 [2024-07-22 13:10:26.679612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:07.324 [2024-07-22 13:10:26.683524] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:07.324 [2024-07-22 13:10:26.683653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:07.324 
[2024-07-22 13:10:26.683673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:07.324 [2024-07-22 13:10:26.687652] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:07.324 [2024-07-22 13:10:26.687774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:07.324 [2024-07-22 13:10:26.687795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:07.324 [2024-07-22 13:10:26.691777] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:07.324 [2024-07-22 13:10:26.691917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:07.324 [2024-07-22 13:10:26.691937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:07.324 [2024-07-22 13:10:26.696009] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:07.324 [2024-07-22 13:10:26.696154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:07.324 [2024-07-22 13:10:26.696175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:07.324 [2024-07-22 13:10:26.700230] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:07.324 [2024-07-22 13:10:26.700459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:07.324 [2024-07-22 13:10:26.700479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:07.324 [2024-07-22 13:10:26.704405] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:07.324 [2024-07-22 13:10:26.704642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:07.324 [2024-07-22 13:10:26.704662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:07.324 [2024-07-22 13:10:26.708558] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:07.324 [2024-07-22 13:10:26.708722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:07.324 [2024-07-22 13:10:26.708742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:07.324 [2024-07-22 13:10:26.712720] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:07.324 [2024-07-22 13:10:26.712836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:48:07.324 [2024-07-22 13:10:26.712856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:07.324 [2024-07-22 13:10:26.716828] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:07.324 [2024-07-22 13:10:26.716952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:07.324 [2024-07-22 13:10:26.716972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:07.324 [2024-07-22 13:10:26.721038] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:07.324 [2024-07-22 13:10:26.721160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:07.324 [2024-07-22 13:10:26.721194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:07.324 [2024-07-22 13:10:26.725203] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:07.324 [2024-07-22 13:10:26.725347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:07.325 [2024-07-22 13:10:26.725382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:07.325 [2024-07-22 13:10:26.729418] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:07.325 [2024-07-22 13:10:26.729561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:07.325 [2024-07-22 13:10:26.729582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:07.325 [2024-07-22 13:10:26.733589] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:07.325 [2024-07-22 13:10:26.733809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:07.325 [2024-07-22 13:10:26.733830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:07.325 [2024-07-22 13:10:26.738062] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:07.325 [2024-07-22 13:10:26.738263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:07.325 [2024-07-22 13:10:26.738302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:07.325 [2024-07-22 13:10:26.742468] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:07.325 [2024-07-22 13:10:26.742657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:07.325 [2024-07-22 13:10:26.742697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:07.585 [2024-07-22 13:10:26.746668] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:07.585 [2024-07-22 13:10:26.746759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:07.585 [2024-07-22 13:10:26.746784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:07.585 [2024-07-22 13:10:26.750920] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:07.585 [2024-07-22 13:10:26.751082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:07.585 [2024-07-22 13:10:26.751106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:07.585 [2024-07-22 13:10:26.755180] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:07.585 [2024-07-22 13:10:26.755304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:07.585 [2024-07-22 13:10:26.755325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:07.585 [2024-07-22 13:10:26.759341] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:07.585 [2024-07-22 13:10:26.759495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:07.585 [2024-07-22 13:10:26.759517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:07.585 [2024-07-22 13:10:26.763518] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:07.585 [2024-07-22 13:10:26.763663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:07.585 [2024-07-22 13:10:26.763684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:07.585 [2024-07-22 13:10:26.767713] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:07.585 [2024-07-22 13:10:26.767933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:07.585 [2024-07-22 13:10:26.767953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:07.585 [2024-07-22 13:10:26.771817] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:07.585 [2024-07-22 13:10:26.772017] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:07.585 [2024-07-22 13:10:26.772037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:07.585 [2024-07-22 13:10:26.776077] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:07.585 [2024-07-22 13:10:26.776252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:07.585 [2024-07-22 13:10:26.776274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:07.585 [2024-07-22 13:10:26.780288] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:07.585 [2024-07-22 13:10:26.780413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:07.585 [2024-07-22 13:10:26.780434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:07.585 [2024-07-22 13:10:26.784396] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:07.585 [2024-07-22 13:10:26.784507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:07.585 [2024-07-22 13:10:26.784528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:07.585 [2024-07-22 13:10:26.788533] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:07.585 [2024-07-22 13:10:26.788644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:07.585 [2024-07-22 13:10:26.788664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:07.585 [2024-07-22 13:10:26.792638] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:07.585 [2024-07-22 13:10:26.792789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:07.585 [2024-07-22 13:10:26.792809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:07.585 [2024-07-22 13:10:26.796704] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:07.585 [2024-07-22 13:10:26.796849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:07.585 [2024-07-22 13:10:26.796869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:07.585 [2024-07-22 13:10:26.801056] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:07.585 [2024-07-22 13:10:26.801293] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:07.585 [2024-07-22 13:10:26.801320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:07.586 [2024-07-22 13:10:26.805143] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:07.586 [2024-07-22 13:10:26.805402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:07.586 [2024-07-22 13:10:26.805482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:07.586 [2024-07-22 13:10:26.809350] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:07.586 [2024-07-22 13:10:26.809526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:07.586 [2024-07-22 13:10:26.809546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:07.586 [2024-07-22 13:10:26.813371] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:07.586 [2024-07-22 13:10:26.813509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:07.586 [2024-07-22 13:10:26.813530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:07.586 [2024-07-22 13:10:26.817555] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:07.586 [2024-07-22 13:10:26.817682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:07.586 [2024-07-22 13:10:26.817703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:07.586 [2024-07-22 13:10:26.821647] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:07.586 [2024-07-22 13:10:26.821763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:07.586 [2024-07-22 13:10:26.821783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:07.586 [2024-07-22 13:10:26.825792] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:07.586 [2024-07-22 13:10:26.825939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:07.586 [2024-07-22 13:10:26.825959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:07.586 [2024-07-22 13:10:26.830014] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 
00:48:07.586 [2024-07-22 13:10:26.830161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:07.586 [2024-07-22 13:10:26.830182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:07.586 [2024-07-22 13:10:26.834123] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:07.586 [2024-07-22 13:10:26.834361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:07.586 [2024-07-22 13:10:26.834404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:07.586 [2024-07-22 13:10:26.838274] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:07.586 [2024-07-22 13:10:26.838485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:07.586 [2024-07-22 13:10:26.838506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:07.586 [2024-07-22 13:10:26.842359] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:07.586 [2024-07-22 13:10:26.842531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:07.586 [2024-07-22 13:10:26.842551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:07.586 [2024-07-22 13:10:26.846423] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:07.586 [2024-07-22 13:10:26.846537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:07.586 [2024-07-22 13:10:26.846557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:07.586 [2024-07-22 13:10:26.850644] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:07.586 [2024-07-22 13:10:26.850750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:07.586 [2024-07-22 13:10:26.850772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:07.586 [2024-07-22 13:10:26.854714] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:07.586 [2024-07-22 13:10:26.854843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:07.586 [2024-07-22 13:10:26.854866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:07.586 [2024-07-22 13:10:26.859026] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:07.586 [2024-07-22 13:10:26.859186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:07.586 [2024-07-22 13:10:26.859207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:07.586 [2024-07-22 13:10:26.863324] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:07.586 [2024-07-22 13:10:26.863466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:07.586 [2024-07-22 13:10:26.863486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:07.586 [2024-07-22 13:10:26.867625] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:07.586 [2024-07-22 13:10:26.867844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:07.586 [2024-07-22 13:10:26.867864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:07.586 [2024-07-22 13:10:26.871749] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:07.586 [2024-07-22 13:10:26.871970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:07.586 [2024-07-22 13:10:26.871991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:07.586 [2024-07-22 13:10:26.875979] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:07.586 [2024-07-22 13:10:26.876128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:07.586 [2024-07-22 13:10:26.876149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:07.586 [2024-07-22 13:10:26.880216] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:07.586 [2024-07-22 13:10:26.880337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:07.586 [2024-07-22 13:10:26.880360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:07.586 [2024-07-22 13:10:26.884317] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:07.586 [2024-07-22 13:10:26.884430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:07.586 [2024-07-22 13:10:26.884451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:07.586 [2024-07-22 13:10:26.888428] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:07.586 [2024-07-22 13:10:26.888539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:07.586 [2024-07-22 13:10:26.888559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:07.586 [2024-07-22 13:10:26.892543] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:07.586 [2024-07-22 13:10:26.892680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:07.586 [2024-07-22 13:10:26.892700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:07.586 [2024-07-22 13:10:26.896674] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:07.586 [2024-07-22 13:10:26.896815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:07.586 [2024-07-22 13:10:26.896836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:07.586 [2024-07-22 13:10:26.900852] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:07.586 [2024-07-22 13:10:26.901077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:07.586 [2024-07-22 13:10:26.901096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:07.586 [2024-07-22 13:10:26.905023] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:07.587 [2024-07-22 13:10:26.905278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:07.587 [2024-07-22 13:10:26.905317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:07.587 [2024-07-22 13:10:26.909299] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:07.587 [2024-07-22 13:10:26.909456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:07.587 [2024-07-22 13:10:26.909476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:07.587 [2024-07-22 13:10:26.913417] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:07.587 [2024-07-22 13:10:26.913540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:07.587 [2024-07-22 13:10:26.913561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:48:07.587 [2024-07-22 13:10:26.917465] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:07.587 [2024-07-22 13:10:26.917596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:07.587 [2024-07-22 13:10:26.917617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:07.587 [2024-07-22 13:10:26.921594] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:07.587 [2024-07-22 13:10:26.921706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:07.587 [2024-07-22 13:10:26.921726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:07.587 [2024-07-22 13:10:26.925691] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:07.587 [2024-07-22 13:10:26.925830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:07.587 [2024-07-22 13:10:26.925850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:07.587 [2024-07-22 13:10:26.929891] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:07.587 [2024-07-22 13:10:26.930033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:07.587 [2024-07-22 13:10:26.930053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:07.587 [2024-07-22 13:10:26.934055] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:07.587 [2024-07-22 13:10:26.934287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:07.587 [2024-07-22 13:10:26.934317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:07.587 [2024-07-22 13:10:26.938118] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:07.587 [2024-07-22 13:10:26.938364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:07.587 [2024-07-22 13:10:26.938417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:07.587 [2024-07-22 13:10:26.942211] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:07.587 [2024-07-22 13:10:26.942362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:07.587 [2024-07-22 13:10:26.942383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:07.587 [2024-07-22 13:10:26.946214] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:07.587 [2024-07-22 13:10:26.946327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:07.587 [2024-07-22 13:10:26.946348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:07.587 [2024-07-22 13:10:26.950164] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:07.587 [2024-07-22 13:10:26.950296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:07.587 [2024-07-22 13:10:26.950316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:07.587 [2024-07-22 13:10:26.954163] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:07.587 [2024-07-22 13:10:26.954301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:07.587 [2024-07-22 13:10:26.954321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:07.587 [2024-07-22 13:10:26.958296] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:07.587 [2024-07-22 13:10:26.958440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:07.587 [2024-07-22 13:10:26.958462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:07.587 [2024-07-22 13:10:26.962330] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:07.587 [2024-07-22 13:10:26.962487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:07.587 [2024-07-22 13:10:26.962508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:07.587 [2024-07-22 13:10:26.966456] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:07.587 [2024-07-22 13:10:26.966731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:07.587 [2024-07-22 13:10:26.966759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:07.587 [2024-07-22 13:10:26.970569] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:07.587 [2024-07-22 13:10:26.970823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:07.587 [2024-07-22 13:10:26.970844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:07.587 [2024-07-22 13:10:26.974594] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:07.587 [2024-07-22 13:10:26.974783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:07.587 [2024-07-22 13:10:26.974804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:07.587 [2024-07-22 13:10:26.978721] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:07.587 [2024-07-22 13:10:26.978850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:07.587 [2024-07-22 13:10:26.978871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:07.587 [2024-07-22 13:10:26.982700] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:07.587 [2024-07-22 13:10:26.982817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:07.587 [2024-07-22 13:10:26.982838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:07.587 [2024-07-22 13:10:26.986782] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:07.587 [2024-07-22 13:10:26.986897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:07.587 [2024-07-22 13:10:26.986918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:07.587 [2024-07-22 13:10:26.990889] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:07.587 [2024-07-22 13:10:26.991027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:07.587 [2024-07-22 13:10:26.991062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:07.587 [2024-07-22 13:10:26.995095] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:07.587 [2024-07-22 13:10:26.995263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:07.587 [2024-07-22 13:10:26.995284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:07.587 [2024-07-22 13:10:26.999322] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:07.587 [2024-07-22 13:10:26.999541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:07.587 [2024-07-22 13:10:26.999561] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:07.587 [2024-07-22 13:10:27.003750] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:07.587 [2024-07-22 13:10:27.003968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:07.587 [2024-07-22 13:10:27.003991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:07.848 [2024-07-22 13:10:27.008149] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:07.848 [2024-07-22 13:10:27.008330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:07.848 [2024-07-22 13:10:27.008353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:07.848 [2024-07-22 13:10:27.012492] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:07.848 [2024-07-22 13:10:27.012594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:07.848 [2024-07-22 13:10:27.012617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:07.848 [2024-07-22 13:10:27.016619] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:07.848 [2024-07-22 13:10:27.016788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:07.848 [2024-07-22 13:10:27.016809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:07.848 [2024-07-22 13:10:27.020706] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:07.848 [2024-07-22 13:10:27.020836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:07.848 [2024-07-22 13:10:27.020857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:07.848 [2024-07-22 13:10:27.024833] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:07.848 [2024-07-22 13:10:27.024971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:07.848 [2024-07-22 13:10:27.024992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:07.848 [2024-07-22 13:10:27.029088] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:07.848 [2024-07-22 13:10:27.029242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:07.848 
[2024-07-22 13:10:27.029263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:07.849 [2024-07-22 13:10:27.033271] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:07.849 [2024-07-22 13:10:27.033493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:07.849 [2024-07-22 13:10:27.033515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:07.849 [2024-07-22 13:10:27.037358] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:07.849 [2024-07-22 13:10:27.037602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:07.849 [2024-07-22 13:10:27.037644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:07.849 [2024-07-22 13:10:27.041470] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:07.849 [2024-07-22 13:10:27.041613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:07.849 [2024-07-22 13:10:27.041634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:07.849 [2024-07-22 13:10:27.045987] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:07.849 [2024-07-22 13:10:27.046123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:07.849 [2024-07-22 13:10:27.046147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:07.849 [2024-07-22 13:10:27.050436] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:07.849 [2024-07-22 13:10:27.050547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:07.849 [2024-07-22 13:10:27.050570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:07.849 [2024-07-22 13:10:27.054370] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:07.849 [2024-07-22 13:10:27.054473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:07.849 [2024-07-22 13:10:27.054495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:07.849 [2024-07-22 13:10:27.058529] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:07.849 [2024-07-22 13:10:27.058704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:48:07.849 [2024-07-22 13:10:27.058726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:07.849 [2024-07-22 13:10:27.062572] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:07.849 [2024-07-22 13:10:27.062753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:07.849 [2024-07-22 13:10:27.062775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:07.849 [2024-07-22 13:10:27.066815] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:07.849 [2024-07-22 13:10:27.067062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:07.849 [2024-07-22 13:10:27.067111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:07.849 [2024-07-22 13:10:27.071101] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:07.849 [2024-07-22 13:10:27.071328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:07.849 [2024-07-22 13:10:27.071350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:07.849 [2024-07-22 13:10:27.075310] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:07.849 [2024-07-22 13:10:27.075470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:07.849 [2024-07-22 13:10:27.075490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:07.849 [2024-07-22 13:10:27.079402] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:07.849 [2024-07-22 13:10:27.079525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:07.849 [2024-07-22 13:10:27.079546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:07.849 [2024-07-22 13:10:27.083466] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:07.849 [2024-07-22 13:10:27.083577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:07.849 [2024-07-22 13:10:27.083598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:07.849 [2024-07-22 13:10:27.087516] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:07.849 [2024-07-22 13:10:27.087658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:07.849 [2024-07-22 13:10:27.087678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:07.849 [2024-07-22 13:10:27.091596] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:07.849 [2024-07-22 13:10:27.091735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:07.849 [2024-07-22 13:10:27.091755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:07.849 [2024-07-22 13:10:27.095709] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:07.849 [2024-07-22 13:10:27.095855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:07.849 [2024-07-22 13:10:27.095875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:07.849 [2024-07-22 13:10:27.099963] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:07.849 [2024-07-22 13:10:27.100198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:07.849 [2024-07-22 13:10:27.100219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:07.849 [2024-07-22 13:10:27.104252] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:07.849 [2024-07-22 13:10:27.104544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:07.849 [2024-07-22 13:10:27.104618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:07.849 [2024-07-22 13:10:27.108889] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:07.849 [2024-07-22 13:10:27.109107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:07.849 [2024-07-22 13:10:27.109130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:07.849 [2024-07-22 13:10:27.113065] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:07.849 [2024-07-22 13:10:27.113217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:07.849 [2024-07-22 13:10:27.113240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:07.849 [2024-07-22 13:10:27.117310] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:07.849 [2024-07-22 13:10:27.117427] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:48:07.849 [2024-07-22 13:10:27.117449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:48:07.849 [2024-07-22 13:10:27.121448] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90
00:48:07.849 [2024-07-22 13:10:27.121581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:48:07.849 [2024-07-22 13:10:27.121603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... the same three-entry pattern - a tcp.c:2034:data_crc32_calc_done data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90, the WRITE that carried it (sqid:1 cid:15 nsid:1, len:32, varying lba), and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion - repeats for every WRITE from 13:10:27.125 through 13:10:27.357 (build time 00:48:07.849 - 00:48:08.111) ...]
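The repeated tcp.c:2034:data_crc32_calc_done failures above are the NVMe/TCP data digest (DDGST) check rejecting the WRITE payloads: the data digest is a CRC32C computed over the PDU data, and a mismatch is reported back to the initiator as the COMMAND TRANSIENT TRANSPORT ERROR (00/22) completions seen in this log. As a rough, hedged illustration only - this is not SPDK's implementation, and it ignores transport details such as PDU framing and digest byte order - a minimal standalone CRC32C check looks like the following sketch:

/* Illustrative sketch only: bitwise CRC32C (Castagnoli, reflected poly
 * 0x82F63B78), the checksum family used for the NVMe/TCP data digest. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

static uint32_t crc32c(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;          /* standard CRC32C seed */
    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int b = 0; b < 8; b++)
            crc = (crc & 1) ? (crc >> 1) ^ 0x82F63B78u : crc >> 1;
    }
    return crc ^ 0xFFFFFFFFu;            /* final inversion */
}

int main(void)
{
    /* "123456789" is the standard CRC32C check vector (expected 0xE3069283). */
    const char payload[] = "123456789";
    uint32_t expected = 0xE3069283u;      /* would be the DDGST sent by the peer */
    uint32_t actual = crc32c((const uint8_t *)payload, strlen(payload));
    printf("computed 0x%08X: %s\n", actual,
           actual == expected ? "digest ok" : "Data digest error");
    return 0;
}

In a real transport the CRC would be computed over the received PDU payload and compared against the DDGST field carried in the PDU; given that every WRITE in this stretch fails the same way, the run appears to be exercising the digest-error handling path deliberately rather than hitting random corruption.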
[... the data digest error / WRITE / COMMAND TRANSIENT TRANSPORT ERROR (00/22) pattern continues uninterrupted on tqpair=(0x5d1e90) for further WRITEs at varying LBAs from 13:10:27.361 through 13:10:27.731 (build time 00:48:08.111 - 00:48:08.375) ...]
00:48:08.375 [2024-07-22 13:10:27.735042] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90
00:48:08.375 [2024-07-22 13:10:27.735149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:48:08.375 [2024-07-22 13:10:27.735169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:08.375 [2024-07-22 13:10:27.739010] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:08.375 [2024-07-22 13:10:27.739149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:08.375 [2024-07-22 13:10:27.739169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:08.375 [2024-07-22 13:10:27.743144] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:08.375 [2024-07-22 13:10:27.743294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:08.375 [2024-07-22 13:10:27.743314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:08.375 [2024-07-22 13:10:27.747124] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:08.375 [2024-07-22 13:10:27.747281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:08.375 [2024-07-22 13:10:27.747301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:08.375 [2024-07-22 13:10:27.751207] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:08.375 [2024-07-22 13:10:27.751392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:08.375 [2024-07-22 13:10:27.751412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:08.375 [2024-07-22 13:10:27.755282] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:08.375 [2024-07-22 13:10:27.755439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:08.375 [2024-07-22 13:10:27.755460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:08.375 [2024-07-22 13:10:27.759279] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:08.375 [2024-07-22 13:10:27.759428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:08.375 [2024-07-22 13:10:27.759448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:08.375 [2024-07-22 13:10:27.763301] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:08.375 [2024-07-22 13:10:27.763406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:08.375 [2024-07-22 13:10:27.763427] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:08.375 [2024-07-22 13:10:27.767290] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:08.375 [2024-07-22 13:10:27.767385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:08.375 [2024-07-22 13:10:27.767406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:08.375 [2024-07-22 13:10:27.771160] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:08.375 [2024-07-22 13:10:27.771311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:08.375 [2024-07-22 13:10:27.771332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:08.375 [2024-07-22 13:10:27.775151] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:08.375 [2024-07-22 13:10:27.775290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:08.375 [2024-07-22 13:10:27.775311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:08.375 [2024-07-22 13:10:27.779123] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:08.375 [2024-07-22 13:10:27.779228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:08.375 [2024-07-22 13:10:27.779260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:08.375 [2024-07-22 13:10:27.783311] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:08.375 [2024-07-22 13:10:27.783484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:08.375 [2024-07-22 13:10:27.783504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:08.375 [2024-07-22 13:10:27.787517] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:08.375 [2024-07-22 13:10:27.787702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:08.375 [2024-07-22 13:10:27.787722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:08.375 [2024-07-22 13:10:27.791974] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:08.375 [2024-07-22 13:10:27.792128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:08.375 
[2024-07-22 13:10:27.792166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:08.635 [2024-07-22 13:10:27.796417] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:08.635 [2024-07-22 13:10:27.796557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:08.635 [2024-07-22 13:10:27.796580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:08.635 [2024-07-22 13:10:27.800677] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:08.635 [2024-07-22 13:10:27.800787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:08.635 [2024-07-22 13:10:27.800810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:08.635 [2024-07-22 13:10:27.804774] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:08.635 [2024-07-22 13:10:27.804917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:08.635 [2024-07-22 13:10:27.804939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:08.635 [2024-07-22 13:10:27.808844] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:08.635 [2024-07-22 13:10:27.808969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:08.635 [2024-07-22 13:10:27.808989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:08.635 [2024-07-22 13:10:27.812920] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:08.635 [2024-07-22 13:10:27.813027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:08.635 [2024-07-22 13:10:27.813048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:08.635 [2024-07-22 13:10:27.817083] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:08.635 [2024-07-22 13:10:27.817322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:08.635 [2024-07-22 13:10:27.817344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:08.635 [2024-07-22 13:10:27.821583] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:08.635 [2024-07-22 13:10:27.821780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:48:08.635 [2024-07-22 13:10:27.821804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:08.635 [2024-07-22 13:10:27.825928] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:08.635 [2024-07-22 13:10:27.826095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:08.635 [2024-07-22 13:10:27.826117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:08.635 [2024-07-22 13:10:27.829921] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:08.635 [2024-07-22 13:10:27.830043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:08.635 [2024-07-22 13:10:27.830065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:08.635 [2024-07-22 13:10:27.833992] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:08.635 [2024-07-22 13:10:27.834090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:08.635 [2024-07-22 13:10:27.834111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:08.635 [2024-07-22 13:10:27.838146] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:08.635 [2024-07-22 13:10:27.838285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:08.635 [2024-07-22 13:10:27.838306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:08.635 [2024-07-22 13:10:27.842045] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:08.635 [2024-07-22 13:10:27.842206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:08.635 [2024-07-22 13:10:27.842228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:08.635 [2024-07-22 13:10:27.846148] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:08.635 [2024-07-22 13:10:27.846249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:08.635 [2024-07-22 13:10:27.846271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:08.635 [2024-07-22 13:10:27.850706] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:08.635 [2024-07-22 13:10:27.850879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:08.635 [2024-07-22 13:10:27.850901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:08.635 [2024-07-22 13:10:27.854854] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:08.635 [2024-07-22 13:10:27.855046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:08.635 [2024-07-22 13:10:27.855067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:08.635 [2024-07-22 13:10:27.858819] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:08.635 [2024-07-22 13:10:27.859000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:08.635 [2024-07-22 13:10:27.859036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:08.635 [2024-07-22 13:10:27.862805] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:08.635 [2024-07-22 13:10:27.862931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:08.635 [2024-07-22 13:10:27.862966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:08.635 [2024-07-22 13:10:27.866856] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:08.635 [2024-07-22 13:10:27.866986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:08.635 [2024-07-22 13:10:27.867022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:08.635 [2024-07-22 13:10:27.870825] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:08.635 [2024-07-22 13:10:27.870980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:08.635 [2024-07-22 13:10:27.871017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:08.635 [2024-07-22 13:10:27.874822] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:08.635 [2024-07-22 13:10:27.874996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:08.635 [2024-07-22 13:10:27.875032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:08.635 [2024-07-22 13:10:27.878935] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:08.635 [2024-07-22 13:10:27.879073] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:08.635 [2024-07-22 13:10:27.879093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:08.635 [2024-07-22 13:10:27.882973] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:08.635 [2024-07-22 13:10:27.883165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:08.635 [2024-07-22 13:10:27.883202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:08.635 [2024-07-22 13:10:27.887032] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:08.635 [2024-07-22 13:10:27.887239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:08.635 [2024-07-22 13:10:27.887272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:08.635 [2024-07-22 13:10:27.891222] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:08.635 [2024-07-22 13:10:27.891386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:08.635 [2024-07-22 13:10:27.891407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:08.635 [2024-07-22 13:10:27.895297] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:08.635 [2024-07-22 13:10:27.895416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:08.635 [2024-07-22 13:10:27.895435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:08.635 [2024-07-22 13:10:27.899284] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:08.635 [2024-07-22 13:10:27.899387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:08.636 [2024-07-22 13:10:27.899407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:08.636 [2024-07-22 13:10:27.903232] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:08.636 [2024-07-22 13:10:27.903393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:08.636 [2024-07-22 13:10:27.903414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:08.636 [2024-07-22 13:10:27.907208] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:08.636 [2024-07-22 13:10:27.907343] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:08.636 [2024-07-22 13:10:27.907364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:08.636 [2024-07-22 13:10:27.911059] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:08.636 [2024-07-22 13:10:27.911152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:08.636 [2024-07-22 13:10:27.911172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:08.636 [2024-07-22 13:10:27.914913] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:08.636 [2024-07-22 13:10:27.915114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:08.636 [2024-07-22 13:10:27.915133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:08.636 [2024-07-22 13:10:27.918821] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:08.636 [2024-07-22 13:10:27.919013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:08.636 [2024-07-22 13:10:27.919032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:08.636 [2024-07-22 13:10:27.922717] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:08.636 [2024-07-22 13:10:27.922863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:08.636 [2024-07-22 13:10:27.922884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:08.636 [2024-07-22 13:10:27.926499] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:08.636 [2024-07-22 13:10:27.926592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:08.636 [2024-07-22 13:10:27.926653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:08.636 [2024-07-22 13:10:27.930443] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:08.636 [2024-07-22 13:10:27.930560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:08.636 [2024-07-22 13:10:27.930581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:08.636 [2024-07-22 13:10:27.934381] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 
00:48:08.636 [2024-07-22 13:10:27.934517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:08.636 [2024-07-22 13:10:27.934554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:08.636 [2024-07-22 13:10:27.938254] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:08.636 [2024-07-22 13:10:27.938373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:08.636 [2024-07-22 13:10:27.938393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:08.636 [2024-07-22 13:10:27.942054] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:08.636 [2024-07-22 13:10:27.942162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:08.636 [2024-07-22 13:10:27.942182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:08.636 [2024-07-22 13:10:27.946058] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:08.636 [2024-07-22 13:10:27.946239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:08.636 [2024-07-22 13:10:27.946260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:08.636 [2024-07-22 13:10:27.949895] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:08.636 [2024-07-22 13:10:27.950043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:08.636 [2024-07-22 13:10:27.950063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:08.636 [2024-07-22 13:10:27.953889] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:08.636 [2024-07-22 13:10:27.954036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:08.636 [2024-07-22 13:10:27.954056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:08.636 [2024-07-22 13:10:27.957835] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:08.636 [2024-07-22 13:10:27.957947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:08.636 [2024-07-22 13:10:27.957967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:08.636 [2024-07-22 13:10:27.961744] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:08.636 [2024-07-22 13:10:27.961844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:08.636 [2024-07-22 13:10:27.961863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:08.636 [2024-07-22 13:10:27.965607] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:08.636 [2024-07-22 13:10:27.965757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:08.636 [2024-07-22 13:10:27.965776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:08.636 [2024-07-22 13:10:27.969583] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:08.636 [2024-07-22 13:10:27.969722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:08.636 [2024-07-22 13:10:27.969742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:08.636 [2024-07-22 13:10:27.973510] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:08.636 [2024-07-22 13:10:27.973623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:08.636 [2024-07-22 13:10:27.973643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:08.636 [2024-07-22 13:10:27.977609] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:08.636 [2024-07-22 13:10:27.977778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:08.636 [2024-07-22 13:10:27.977799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:08.636 [2024-07-22 13:10:27.981504] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:08.636 [2024-07-22 13:10:27.981728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:08.636 [2024-07-22 13:10:27.981748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:08.636 [2024-07-22 13:10:27.985554] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:08.636 [2024-07-22 13:10:27.985721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:08.636 [2024-07-22 13:10:27.985742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:08.636 [2024-07-22 13:10:27.989524] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:08.636 [2024-07-22 13:10:27.989662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:08.636 [2024-07-22 13:10:27.989682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:08.636 [2024-07-22 13:10:27.993496] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:08.636 [2024-07-22 13:10:27.993608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:08.636 [2024-07-22 13:10:27.993629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:08.636 [2024-07-22 13:10:27.997617] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:08.636 [2024-07-22 13:10:27.997760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:08.636 [2024-07-22 13:10:27.997781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:08.636 [2024-07-22 13:10:28.001620] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:08.636 [2024-07-22 13:10:28.001743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:08.636 [2024-07-22 13:10:28.001763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:08.636 [2024-07-22 13:10:28.005629] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:08.637 [2024-07-22 13:10:28.005726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:08.637 [2024-07-22 13:10:28.005747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:08.637 [2024-07-22 13:10:28.009652] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:08.637 [2024-07-22 13:10:28.009818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:08.637 [2024-07-22 13:10:28.009838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:08.637 [2024-07-22 13:10:28.013677] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:08.637 [2024-07-22 13:10:28.013818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:08.637 [2024-07-22 13:10:28.013839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:48:08.637 [2024-07-22 13:10:28.017759] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:08.637 [2024-07-22 13:10:28.017948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:08.637 [2024-07-22 13:10:28.017969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:08.637 [2024-07-22 13:10:28.021753] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:08.637 [2024-07-22 13:10:28.021858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:08.637 [2024-07-22 13:10:28.021895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:08.637 [2024-07-22 13:10:28.025860] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:08.637 [2024-07-22 13:10:28.025989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:08.637 [2024-07-22 13:10:28.026009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:08.637 [2024-07-22 13:10:28.029799] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:08.637 [2024-07-22 13:10:28.029923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:08.637 [2024-07-22 13:10:28.029944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:08.637 [2024-07-22 13:10:28.033831] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:08.637 [2024-07-22 13:10:28.033987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:08.637 [2024-07-22 13:10:28.034007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:08.637 [2024-07-22 13:10:28.038097] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:08.637 [2024-07-22 13:10:28.038246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:08.637 [2024-07-22 13:10:28.038280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:08.637 [2024-07-22 13:10:28.042815] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:08.637 [2024-07-22 13:10:28.043021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:08.637 [2024-07-22 13:10:28.043046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:08.637 [2024-07-22 13:10:28.047139] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:08.637 [2024-07-22 13:10:28.047380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:08.637 [2024-07-22 13:10:28.047403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:08.637 [2024-07-22 13:10:28.051609] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:08.637 [2024-07-22 13:10:28.051833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:08.637 [2024-07-22 13:10:28.051857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:08.896 [2024-07-22 13:10:28.056174] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:08.896 [2024-07-22 13:10:28.056311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:08.896 [2024-07-22 13:10:28.056334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:08.896 [2024-07-22 13:10:28.060371] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:08.896 [2024-07-22 13:10:28.060517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:08.896 [2024-07-22 13:10:28.060539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:08.896 [2024-07-22 13:10:28.064597] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:08.896 [2024-07-22 13:10:28.064732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:08.896 [2024-07-22 13:10:28.064754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:08.896 [2024-07-22 13:10:28.068700] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:08.897 [2024-07-22 13:10:28.068837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:08.897 [2024-07-22 13:10:28.068857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:08.897 [2024-07-22 13:10:28.072636] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:08.897 [2024-07-22 13:10:28.072730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:08.897 [2024-07-22 13:10:28.072751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:08.897 [2024-07-22 13:10:28.076998] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:08.897 [2024-07-22 13:10:28.077285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:08.897 [2024-07-22 13:10:28.077347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:08.897 [2024-07-22 13:10:28.081665] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:08.897 [2024-07-22 13:10:28.081884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:08.897 [2024-07-22 13:10:28.081907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:08.897 [2024-07-22 13:10:28.085627] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:08.897 [2024-07-22 13:10:28.085779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:08.897 [2024-07-22 13:10:28.085799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:08.897 [2024-07-22 13:10:28.089591] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:08.897 [2024-07-22 13:10:28.089711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:08.897 [2024-07-22 13:10:28.089732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:08.897 [2024-07-22 13:10:28.093520] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:08.897 [2024-07-22 13:10:28.093631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:08.897 [2024-07-22 13:10:28.093651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:08.897 [2024-07-22 13:10:28.097649] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:08.897 [2024-07-22 13:10:28.097771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:08.897 [2024-07-22 13:10:28.097791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:08.897 [2024-07-22 13:10:28.101775] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:08.897 [2024-07-22 13:10:28.101930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:08.897 [2024-07-22 13:10:28.101951] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:08.897 [2024-07-22 13:10:28.105945] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:08.897 [2024-07-22 13:10:28.106042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:08.897 [2024-07-22 13:10:28.106062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:08.897 [2024-07-22 13:10:28.109997] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:08.897 [2024-07-22 13:10:28.110183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:08.897 [2024-07-22 13:10:28.110231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:08.897 [2024-07-22 13:10:28.113977] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:08.897 [2024-07-22 13:10:28.114124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:08.897 [2024-07-22 13:10:28.114159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:08.897 [2024-07-22 13:10:28.118065] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:08.897 [2024-07-22 13:10:28.118260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:08.897 [2024-07-22 13:10:28.118280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:08.897 [2024-07-22 13:10:28.122089] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:08.897 [2024-07-22 13:10:28.122248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:08.897 [2024-07-22 13:10:28.122269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:08.897 [2024-07-22 13:10:28.126090] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:08.897 [2024-07-22 13:10:28.126242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:08.897 [2024-07-22 13:10:28.126262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:08.897 [2024-07-22 13:10:28.130100] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:08.897 [2024-07-22 13:10:28.130278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:08.897 
[2024-07-22 13:10:28.130298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:08.897 [2024-07-22 13:10:28.134053] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:08.897 [2024-07-22 13:10:28.134260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:08.897 [2024-07-22 13:10:28.134281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:08.897 [2024-07-22 13:10:28.138294] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:08.897 [2024-07-22 13:10:28.138410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:08.897 [2024-07-22 13:10:28.138431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:08.897 [2024-07-22 13:10:28.142354] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:08.897 [2024-07-22 13:10:28.142552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:08.897 [2024-07-22 13:10:28.142572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:08.897 [2024-07-22 13:10:28.146432] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:08.897 [2024-07-22 13:10:28.146673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:08.897 [2024-07-22 13:10:28.146695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:08.897 [2024-07-22 13:10:28.150515] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:08.897 [2024-07-22 13:10:28.150744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:08.897 [2024-07-22 13:10:28.150766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:08.897 [2024-07-22 13:10:28.154800] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:08.897 [2024-07-22 13:10:28.154910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:08.897 [2024-07-22 13:10:28.154945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:08.897 [2024-07-22 13:10:28.159061] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:08.897 [2024-07-22 13:10:28.159168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:48:08.897 [2024-07-22 13:10:28.159189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:08.897 [2024-07-22 13:10:28.162997] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:08.897 [2024-07-22 13:10:28.163137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:08.897 [2024-07-22 13:10:28.163156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:08.897 [2024-07-22 13:10:28.166986] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:08.897 [2024-07-22 13:10:28.167125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:08.897 [2024-07-22 13:10:28.167145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:08.897 [2024-07-22 13:10:28.170948] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:08.897 [2024-07-22 13:10:28.171074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:08.897 [2024-07-22 13:10:28.171095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:08.897 [2024-07-22 13:10:28.175127] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:08.897 [2024-07-22 13:10:28.175332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:08.897 [2024-07-22 13:10:28.175353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:08.897 [2024-07-22 13:10:28.179141] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:08.898 [2024-07-22 13:10:28.179327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:08.898 [2024-07-22 13:10:28.179348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:08.898 [2024-07-22 13:10:28.183154] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:08.898 [2024-07-22 13:10:28.183335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:08.898 [2024-07-22 13:10:28.183355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:08.898 [2024-07-22 13:10:28.187144] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:08.898 [2024-07-22 13:10:28.187281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:08.898 [2024-07-22 13:10:28.187302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:08.898 [2024-07-22 13:10:28.191121] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:08.898 [2024-07-22 13:10:28.191253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:08.898 [2024-07-22 13:10:28.191272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:08.898 [2024-07-22 13:10:28.195036] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:08.898 [2024-07-22 13:10:28.195180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:08.898 [2024-07-22 13:10:28.195200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:08.898 [2024-07-22 13:10:28.199009] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:08.898 [2024-07-22 13:10:28.199131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:08.898 [2024-07-22 13:10:28.199155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:08.898 [2024-07-22 13:10:28.203200] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:08.898 [2024-07-22 13:10:28.203329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:08.898 [2024-07-22 13:10:28.203352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:08.898 [2024-07-22 13:10:28.207402] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:08.898 [2024-07-22 13:10:28.207572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:08.898 [2024-07-22 13:10:28.207591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:08.898 [2024-07-22 13:10:28.211325] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:08.898 [2024-07-22 13:10:28.211492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:08.898 [2024-07-22 13:10:28.211511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:08.898 [2024-07-22 13:10:28.215410] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:08.898 [2024-07-22 13:10:28.215559] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:08.898 [2024-07-22 13:10:28.215580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:08.898 [2024-07-22 13:10:28.219482] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:08.898 [2024-07-22 13:10:28.219608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:08.898 [2024-07-22 13:10:28.219627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:08.898 [2024-07-22 13:10:28.223452] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:08.898 [2024-07-22 13:10:28.223547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:08.898 [2024-07-22 13:10:28.223567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:08.898 [2024-07-22 13:10:28.227551] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:08.898 [2024-07-22 13:10:28.227687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:08.898 [2024-07-22 13:10:28.227707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:08.898 [2024-07-22 13:10:28.231561] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:08.898 [2024-07-22 13:10:28.231705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:08.898 [2024-07-22 13:10:28.231725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:08.898 [2024-07-22 13:10:28.235509] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:08.898 [2024-07-22 13:10:28.235600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:08.898 [2024-07-22 13:10:28.235620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:08.898 [2024-07-22 13:10:28.239547] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:08.898 [2024-07-22 13:10:28.239718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:08.898 [2024-07-22 13:10:28.239738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:08.898 [2024-07-22 13:10:28.243682] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:08.898 [2024-07-22 13:10:28.243849] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:08.898 [2024-07-22 13:10:28.243886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:08.898 [2024-07-22 13:10:28.247753] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:08.898 [2024-07-22 13:10:28.247931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:08.898 [2024-07-22 13:10:28.247952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:08.898 [2024-07-22 13:10:28.251716] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:08.898 [2024-07-22 13:10:28.251836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:08.898 [2024-07-22 13:10:28.251856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:08.898 [2024-07-22 13:10:28.255699] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:08.898 [2024-07-22 13:10:28.255802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:08.898 [2024-07-22 13:10:28.255822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:08.898 [2024-07-22 13:10:28.259688] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:08.898 [2024-07-22 13:10:28.259824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:08.898 [2024-07-22 13:10:28.259844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:08.898 [2024-07-22 13:10:28.263678] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:08.898 [2024-07-22 13:10:28.263807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:08.898 [2024-07-22 13:10:28.263827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:08.898 [2024-07-22 13:10:28.267928] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:08.898 [2024-07-22 13:10:28.268049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:08.898 [2024-07-22 13:10:28.268070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:08.898 [2024-07-22 13:10:28.272397] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:08.898 
[2024-07-22 13:10:28.272602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:08.898 [2024-07-22 13:10:28.272623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:08.898 [2024-07-22 13:10:28.276945] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:08.898 [2024-07-22 13:10:28.277095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:08.898 [2024-07-22 13:10:28.277116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:08.898 [2024-07-22 13:10:28.281616] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:08.898 [2024-07-22 13:10:28.281790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:08.898 [2024-07-22 13:10:28.281810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:08.898 [2024-07-22 13:10:28.286421] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:08.898 [2024-07-22 13:10:28.286588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:08.898 [2024-07-22 13:10:28.286635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:08.898 [2024-07-22 13:10:28.290919] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:08.899 [2024-07-22 13:10:28.291076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:08.899 [2024-07-22 13:10:28.291096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:08.899 [2024-07-22 13:10:28.295658] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:08.899 [2024-07-22 13:10:28.295854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:08.899 [2024-07-22 13:10:28.295877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:08.899 [2024-07-22 13:10:28.300497] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:08.899 [2024-07-22 13:10:28.300652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:08.899 [2024-07-22 13:10:28.300675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:08.899 [2024-07-22 13:10:28.305002] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) 
with pdu=0x2000190fef90 00:48:08.899 [2024-07-22 13:10:28.305113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:08.899 [2024-07-22 13:10:28.305135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:08.899 [2024-07-22 13:10:28.309209] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:08.899 [2024-07-22 13:10:28.309383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:08.899 [2024-07-22 13:10:28.309404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:08.899 [2024-07-22 13:10:28.313285] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:08.899 [2024-07-22 13:10:28.313494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:08.899 [2024-07-22 13:10:28.313539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:09.158 [2024-07-22 13:10:28.317780] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:09.158 [2024-07-22 13:10:28.317950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:09.158 [2024-07-22 13:10:28.317973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:09.158 [2024-07-22 13:10:28.321947] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:09.158 [2024-07-22 13:10:28.322066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:09.158 [2024-07-22 13:10:28.322089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:09.158 [2024-07-22 13:10:28.326115] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:09.158 [2024-07-22 13:10:28.326227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:09.158 [2024-07-22 13:10:28.326250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:09.158 [2024-07-22 13:10:28.330080] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:09.158 [2024-07-22 13:10:28.330240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:09.158 [2024-07-22 13:10:28.330261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:09.158 [2024-07-22 13:10:28.334379] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:09.158 [2024-07-22 13:10:28.334568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:09.158 [2024-07-22 13:10:28.334592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:09.158 [2024-07-22 13:10:28.339002] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:09.158 [2024-07-22 13:10:28.339160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:09.158 [2024-07-22 13:10:28.339183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:09.158 [2024-07-22 13:10:28.343289] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:09.158 [2024-07-22 13:10:28.343465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:09.158 [2024-07-22 13:10:28.343488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:09.158 [2024-07-22 13:10:28.347317] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:09.158 [2024-07-22 13:10:28.347472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:09.158 [2024-07-22 13:10:28.347493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:09.158 [2024-07-22 13:10:28.351328] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:09.158 [2024-07-22 13:10:28.351473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:09.158 [2024-07-22 13:10:28.351494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:09.158 [2024-07-22 13:10:28.355294] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:09.158 [2024-07-22 13:10:28.355412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:09.158 [2024-07-22 13:10:28.355433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:09.158 [2024-07-22 13:10:28.359272] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:09.158 [2024-07-22 13:10:28.359376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:09.158 [2024-07-22 13:10:28.359396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:09.158 [2024-07-22 13:10:28.363151] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:09.158 [2024-07-22 13:10:28.363299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:09.158 [2024-07-22 13:10:28.363336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:09.158 [2024-07-22 13:10:28.367193] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:09.158 [2024-07-22 13:10:28.367322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:09.158 [2024-07-22 13:10:28.367343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:09.158 [2024-07-22 13:10:28.371032] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:09.158 [2024-07-22 13:10:28.371126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:09.158 [2024-07-22 13:10:28.371145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:09.158 [2024-07-22 13:10:28.375151] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:09.158 [2024-07-22 13:10:28.375337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:09.158 [2024-07-22 13:10:28.375358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:09.158 [2024-07-22 13:10:28.379060] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:09.158 [2024-07-22 13:10:28.379211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:09.158 [2024-07-22 13:10:28.379244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:09.158 [2024-07-22 13:10:28.383305] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:09.158 [2024-07-22 13:10:28.383459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:09.158 [2024-07-22 13:10:28.383495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:09.158 [2024-07-22 13:10:28.387730] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:09.158 [2024-07-22 13:10:28.387825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:09.158 [2024-07-22 13:10:28.387846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:48:09.158 [2024-07-22 13:10:28.392362] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:09.158 [2024-07-22 13:10:28.392468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:09.158 [2024-07-22 13:10:28.392490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:09.158 [2024-07-22 13:10:28.396784] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:09.158 [2024-07-22 13:10:28.396908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:09.158 [2024-07-22 13:10:28.396928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:09.158 [2024-07-22 13:10:28.401367] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:09.158 [2024-07-22 13:10:28.401522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:09.158 [2024-07-22 13:10:28.401558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:09.158 [2024-07-22 13:10:28.405810] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:09.158 [2024-07-22 13:10:28.405921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:09.158 [2024-07-22 13:10:28.405941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:09.158 [2024-07-22 13:10:28.410307] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:09.158 [2024-07-22 13:10:28.410500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:09.158 [2024-07-22 13:10:28.410520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:09.158 [2024-07-22 13:10:28.414681] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:09.159 [2024-07-22 13:10:28.414852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:09.159 [2024-07-22 13:10:28.414874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:09.159 [2024-07-22 13:10:28.419216] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:09.159 [2024-07-22 13:10:28.419381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:09.159 [2024-07-22 13:10:28.419402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:09.159 [2024-07-22 13:10:28.423438] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:09.159 [2024-07-22 13:10:28.423549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:09.159 [2024-07-22 13:10:28.423569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:09.159 [2024-07-22 13:10:28.427788] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:09.159 [2024-07-22 13:10:28.427901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:09.159 [2024-07-22 13:10:28.427922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:09.159 [2024-07-22 13:10:28.432054] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:09.159 [2024-07-22 13:10:28.432202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:09.159 [2024-07-22 13:10:28.432223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:09.159 [2024-07-22 13:10:28.436202] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:09.159 [2024-07-22 13:10:28.436328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:09.159 [2024-07-22 13:10:28.436349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:09.159 [2024-07-22 13:10:28.440223] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:09.159 [2024-07-22 13:10:28.440330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:09.159 [2024-07-22 13:10:28.440350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:09.159 [2024-07-22 13:10:28.444360] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:09.159 [2024-07-22 13:10:28.444533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:09.159 [2024-07-22 13:10:28.444553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:09.159 [2024-07-22 13:10:28.448559] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:09.159 [2024-07-22 13:10:28.448705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:09.159 [2024-07-22 13:10:28.448725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:09.159 [2024-07-22 13:10:28.452643] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:09.159 [2024-07-22 13:10:28.452791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:09.159 [2024-07-22 13:10:28.452811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:09.159 [2024-07-22 13:10:28.456747] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:09.159 [2024-07-22 13:10:28.456856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:09.159 [2024-07-22 13:10:28.456876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:09.159 [2024-07-22 13:10:28.460904] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:09.159 [2024-07-22 13:10:28.461011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:09.159 [2024-07-22 13:10:28.461030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:09.159 [2024-07-22 13:10:28.465291] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:09.159 [2024-07-22 13:10:28.465425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:09.159 [2024-07-22 13:10:28.465445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:09.159 [2024-07-22 13:10:28.469444] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:09.159 [2024-07-22 13:10:28.469584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:09.159 [2024-07-22 13:10:28.469604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:09.159 [2024-07-22 13:10:28.473443] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:09.159 [2024-07-22 13:10:28.473538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:09.159 [2024-07-22 13:10:28.473558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:09.159 [2024-07-22 13:10:28.477543] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:09.159 [2024-07-22 13:10:28.477715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:09.159 [2024-07-22 13:10:28.477735] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:09.159 [2024-07-22 13:10:28.481556] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:09.159 [2024-07-22 13:10:28.481740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:09.159 [2024-07-22 13:10:28.481760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:09.159 [2024-07-22 13:10:28.485764] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:09.159 [2024-07-22 13:10:28.485931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:09.159 [2024-07-22 13:10:28.485951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:09.159 [2024-07-22 13:10:28.489779] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:09.159 [2024-07-22 13:10:28.489893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:09.159 [2024-07-22 13:10:28.489913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:09.159 [2024-07-22 13:10:28.493843] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:09.159 [2024-07-22 13:10:28.493964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:09.159 [2024-07-22 13:10:28.493984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:09.159 [2024-07-22 13:10:28.497972] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:09.159 [2024-07-22 13:10:28.498097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:09.159 [2024-07-22 13:10:28.498118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:09.159 [2024-07-22 13:10:28.502234] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:09.159 [2024-07-22 13:10:28.502377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:09.159 [2024-07-22 13:10:28.502397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:09.159 [2024-07-22 13:10:28.506247] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:09.159 [2024-07-22 13:10:28.506366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:09.159 
[2024-07-22 13:10:28.506386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:09.159 [2024-07-22 13:10:28.510310] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:09.159 [2024-07-22 13:10:28.510488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:09.159 [2024-07-22 13:10:28.510508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:09.159 [2024-07-22 13:10:28.514360] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:09.159 [2024-07-22 13:10:28.514515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:09.159 [2024-07-22 13:10:28.514536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:09.159 [2024-07-22 13:10:28.518469] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:09.159 [2024-07-22 13:10:28.518657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:09.159 [2024-07-22 13:10:28.518679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:09.159 [2024-07-22 13:10:28.522572] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:09.159 [2024-07-22 13:10:28.522697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:09.159 [2024-07-22 13:10:28.522720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:09.159 [2024-07-22 13:10:28.526720] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:09.160 [2024-07-22 13:10:28.526827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:09.160 [2024-07-22 13:10:28.526849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:09.160 [2024-07-22 13:10:28.530844] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:09.160 [2024-07-22 13:10:28.531028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:09.160 [2024-07-22 13:10:28.531048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:09.160 [2024-07-22 13:10:28.534984] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:09.160 [2024-07-22 13:10:28.535123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:48:09.160 [2024-07-22 13:10:28.535143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:09.160 [2024-07-22 13:10:28.539343] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:09.160 [2024-07-22 13:10:28.539453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:09.160 [2024-07-22 13:10:28.539474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:09.160 [2024-07-22 13:10:28.543471] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:09.160 [2024-07-22 13:10:28.543643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:09.160 [2024-07-22 13:10:28.543663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:09.160 [2024-07-22 13:10:28.547632] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:09.160 [2024-07-22 13:10:28.547779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:09.160 [2024-07-22 13:10:28.547799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:48:09.160 [2024-07-22 13:10:28.551721] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:09.160 [2024-07-22 13:10:28.551886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:09.160 [2024-07-22 13:10:28.551907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:48:09.160 [2024-07-22 13:10:28.556447] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:09.160 [2024-07-22 13:10:28.556547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:09.160 [2024-07-22 13:10:28.556570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:48:09.160 [2024-07-22 13:10:28.560969] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5d1e90) with pdu=0x2000190fef90 00:48:09.160 [2024-07-22 13:10:28.561083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:09.160 [2024-07-22 13:10:28.561106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:48:09.160 00:48:09.160 Latency(us) 00:48:09.160 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:48:09.160 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:48:09.160 nvme0n1 : 2.00 7398.84 924.86 0.00 0.00 
2157.64 1653.29 6076.97 00:48:09.160 =================================================================================================================== 00:48:09.160 Total : 7398.84 924.86 0.00 0.00 2157.64 1653.29 6076.97 00:48:09.160 0 00:48:09.417 13:10:28 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:48:09.417 13:10:28 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:48:09.417 13:10:28 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:48:09.417 13:10:28 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:48:09.417 | .driver_specific 00:48:09.417 | .nvme_error 00:48:09.417 | .status_code 00:48:09.417 | .command_transient_transport_error' 00:48:09.674 13:10:28 -- host/digest.sh@71 -- # (( 477 > 0 )) 00:48:09.674 13:10:28 -- host/digest.sh@73 -- # killprocess 97217 00:48:09.674 13:10:28 -- common/autotest_common.sh@926 -- # '[' -z 97217 ']' 00:48:09.674 13:10:28 -- common/autotest_common.sh@930 -- # kill -0 97217 00:48:09.674 13:10:28 -- common/autotest_common.sh@931 -- # uname 00:48:09.674 13:10:28 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:48:09.674 13:10:28 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 97217 00:48:09.674 killing process with pid 97217 00:48:09.674 Received shutdown signal, test time was about 2.000000 seconds 00:48:09.674 00:48:09.674 Latency(us) 00:48:09.674 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:48:09.674 =================================================================================================================== 00:48:09.674 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:48:09.674 13:10:28 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:48:09.674 13:10:28 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:48:09.674 13:10:28 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 97217' 00:48:09.674 13:10:28 -- common/autotest_common.sh@945 -- # kill 97217 00:48:09.674 13:10:28 -- common/autotest_common.sh@950 -- # wait 97217 00:48:09.674 13:10:29 -- host/digest.sh@115 -- # killprocess 96909 00:48:09.674 13:10:29 -- common/autotest_common.sh@926 -- # '[' -z 96909 ']' 00:48:09.674 13:10:29 -- common/autotest_common.sh@930 -- # kill -0 96909 00:48:09.674 13:10:29 -- common/autotest_common.sh@931 -- # uname 00:48:09.674 13:10:29 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:48:09.674 13:10:29 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 96909 00:48:09.932 killing process with pid 96909 00:48:09.933 13:10:29 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:48:09.933 13:10:29 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:48:09.933 13:10:29 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 96909' 00:48:09.933 13:10:29 -- common/autotest_common.sh@945 -- # kill 96909 00:48:09.933 13:10:29 -- common/autotest_common.sh@950 -- # wait 96909 00:48:09.933 ************************************ 00:48:09.933 END TEST nvmf_digest_error 00:48:09.933 ************************************ 00:48:09.933 00:48:09.933 real 0m18.009s 00:48:09.933 user 0m33.921s 00:48:09.933 sys 0m4.779s 00:48:09.933 13:10:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:48:09.933 13:10:29 -- common/autotest_common.sh@10 -- # set +x 00:48:09.933 13:10:29 -- host/digest.sh@138 -- # trap - SIGINT SIGTERM EXIT 00:48:09.933 13:10:29 -- host/digest.sh@139 -- # nvmftestfini 00:48:09.933 13:10:29 -- 
nvmf/common.sh@476 -- # nvmfcleanup 00:48:09.933 13:10:29 -- nvmf/common.sh@116 -- # sync 00:48:10.191 13:10:29 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:48:10.191 13:10:29 -- nvmf/common.sh@119 -- # set +e 00:48:10.191 13:10:29 -- nvmf/common.sh@120 -- # for i in {1..20} 00:48:10.191 13:10:29 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:48:10.191 rmmod nvme_tcp 00:48:10.191 rmmod nvme_fabrics 00:48:10.191 rmmod nvme_keyring 00:48:10.191 13:10:29 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:48:10.191 Process with pid 96909 is not found 00:48:10.191 13:10:29 -- nvmf/common.sh@123 -- # set -e 00:48:10.191 13:10:29 -- nvmf/common.sh@124 -- # return 0 00:48:10.191 13:10:29 -- nvmf/common.sh@477 -- # '[' -n 96909 ']' 00:48:10.191 13:10:29 -- nvmf/common.sh@478 -- # killprocess 96909 00:48:10.191 13:10:29 -- common/autotest_common.sh@926 -- # '[' -z 96909 ']' 00:48:10.191 13:10:29 -- common/autotest_common.sh@930 -- # kill -0 96909 00:48:10.191 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (96909) - No such process 00:48:10.191 13:10:29 -- common/autotest_common.sh@953 -- # echo 'Process with pid 96909 is not found' 00:48:10.191 13:10:29 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:48:10.191 13:10:29 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:48:10.191 13:10:29 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:48:10.191 13:10:29 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:48:10.191 13:10:29 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:48:10.191 13:10:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:48:10.191 13:10:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:48:10.191 13:10:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:48:10.191 13:10:29 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:48:10.191 00:48:10.191 real 0m37.088s 00:48:10.191 user 1m8.888s 00:48:10.191 sys 0m9.799s 00:48:10.191 ************************************ 00:48:10.191 END TEST nvmf_digest 00:48:10.191 ************************************ 00:48:10.191 13:10:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:48:10.191 13:10:29 -- common/autotest_common.sh@10 -- # set +x 00:48:10.191 13:10:29 -- nvmf/nvmf.sh@110 -- # [[ 1 -eq 1 ]] 00:48:10.191 13:10:29 -- nvmf/nvmf.sh@110 -- # [[ tcp == \t\c\p ]] 00:48:10.191 13:10:29 -- nvmf/nvmf.sh@112 -- # run_test nvmf_mdns_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:48:10.191 13:10:29 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:48:10.191 13:10:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:48:10.191 13:10:29 -- common/autotest_common.sh@10 -- # set +x 00:48:10.191 ************************************ 00:48:10.191 START TEST nvmf_mdns_discovery 00:48:10.191 ************************************ 00:48:10.191 13:10:29 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:48:10.450 * Looking for test storage... 
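(Editorial note on the teardown traced above: nvmftestfini unloads the nvme-tcp, nvme-fabrics and nvme-keyring modules, removes the target-side network namespace and flushes the initiator interface before the mdns_discovery suite re-creates the same veth topology below. A minimal sketch of that teardown, assuming the interface and namespace names shown in this log (nvmf_init_if, nvmf_tgt_ns_spdk) and not a verbatim copy of nvmf/common.sh:

    # Unload the NVMe/TCP initiator stack; tolerate modules that are already gone.
    for mod in nvme-tcp nvme-fabrics nvme-keyring; do
        modprobe -v -r "$mod" || true
    done
    # Drop the target-side namespace and flush the initiator-side address,
    # mirroring remove_spdk_ns and "ip -4 addr flush nvmf_init_if" in the trace.
    ip netns del nvmf_tgt_ns_spdk 2>/dev/null || true
    ip -4 addr flush nvmf_init_if 2>/dev/null || true

The next suite then rebuilds the namespace, veth pairs and bridge from scratch, as the nvmf_veth_init trace that follows shows.)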
00:48:10.450 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:48:10.450 13:10:29 -- host/mdns_discovery.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:48:10.450 13:10:29 -- nvmf/common.sh@7 -- # uname -s 00:48:10.450 13:10:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:48:10.450 13:10:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:48:10.450 13:10:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:48:10.450 13:10:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:48:10.450 13:10:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:48:10.450 13:10:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:48:10.450 13:10:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:48:10.450 13:10:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:48:10.450 13:10:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:48:10.450 13:10:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:48:10.450 13:10:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:864ae4a3-cd96-439b-8ef2-6dab4d992115 00:48:10.450 13:10:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=864ae4a3-cd96-439b-8ef2-6dab4d992115 00:48:10.450 13:10:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:48:10.450 13:10:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:48:10.450 13:10:29 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:48:10.450 13:10:29 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:48:10.450 13:10:29 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:48:10.450 13:10:29 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:48:10.450 13:10:29 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:48:10.450 13:10:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:48:10.451 13:10:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:48:10.451 13:10:29 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:48:10.451 13:10:29 -- 
paths/export.sh@5 -- # export PATH 00:48:10.451 13:10:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:48:10.451 13:10:29 -- nvmf/common.sh@46 -- # : 0 00:48:10.451 13:10:29 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:48:10.451 13:10:29 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:48:10.451 13:10:29 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:48:10.451 13:10:29 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:48:10.451 13:10:29 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:48:10.451 13:10:29 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:48:10.451 13:10:29 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:48:10.451 13:10:29 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:48:10.451 13:10:29 -- host/mdns_discovery.sh@12 -- # DISCOVERY_FILTER=address 00:48:10.451 13:10:29 -- host/mdns_discovery.sh@13 -- # DISCOVERY_PORT=8009 00:48:10.451 13:10:29 -- host/mdns_discovery.sh@14 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:48:10.451 13:10:29 -- host/mdns_discovery.sh@17 -- # NQN=nqn.2016-06.io.spdk:cnode 00:48:10.451 13:10:29 -- host/mdns_discovery.sh@18 -- # NQN2=nqn.2016-06.io.spdk:cnode2 00:48:10.451 13:10:29 -- host/mdns_discovery.sh@20 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:48:10.451 13:10:29 -- host/mdns_discovery.sh@21 -- # HOST_SOCK=/tmp/host.sock 00:48:10.451 13:10:29 -- host/mdns_discovery.sh@23 -- # nvmftestinit 00:48:10.451 13:10:29 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:48:10.451 13:10:29 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:48:10.451 13:10:29 -- nvmf/common.sh@436 -- # prepare_net_devs 00:48:10.451 13:10:29 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:48:10.451 13:10:29 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:48:10.451 13:10:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:48:10.451 13:10:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:48:10.451 13:10:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:48:10.451 13:10:29 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:48:10.451 13:10:29 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:48:10.451 13:10:29 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:48:10.451 13:10:29 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:48:10.451 13:10:29 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:48:10.451 13:10:29 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:48:10.451 13:10:29 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:48:10.451 13:10:29 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:48:10.451 13:10:29 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:48:10.451 13:10:29 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:48:10.451 13:10:29 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:48:10.451 13:10:29 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:48:10.451 13:10:29 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:48:10.451 13:10:29 -- nvmf/common.sh@147 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:48:10.451 13:10:29 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:48:10.451 13:10:29 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:48:10.451 13:10:29 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:48:10.451 13:10:29 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:48:10.451 13:10:29 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:48:10.451 13:10:29 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:48:10.451 Cannot find device "nvmf_tgt_br" 00:48:10.451 13:10:29 -- nvmf/common.sh@154 -- # true 00:48:10.451 13:10:29 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:48:10.451 Cannot find device "nvmf_tgt_br2" 00:48:10.451 13:10:29 -- nvmf/common.sh@155 -- # true 00:48:10.451 13:10:29 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:48:10.451 13:10:29 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:48:10.451 Cannot find device "nvmf_tgt_br" 00:48:10.451 13:10:29 -- nvmf/common.sh@157 -- # true 00:48:10.451 13:10:29 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:48:10.451 Cannot find device "nvmf_tgt_br2" 00:48:10.451 13:10:29 -- nvmf/common.sh@158 -- # true 00:48:10.451 13:10:29 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:48:10.451 13:10:29 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:48:10.451 13:10:29 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:48:10.451 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:48:10.451 13:10:29 -- nvmf/common.sh@161 -- # true 00:48:10.451 13:10:29 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:48:10.451 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:48:10.451 13:10:29 -- nvmf/common.sh@162 -- # true 00:48:10.451 13:10:29 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:48:10.451 13:10:29 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:48:10.451 13:10:29 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:48:10.451 13:10:29 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:48:10.451 13:10:29 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:48:10.451 13:10:29 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:48:10.710 13:10:29 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:48:10.710 13:10:29 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:48:10.710 13:10:29 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:48:10.710 13:10:29 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:48:10.710 13:10:29 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:48:10.710 13:10:29 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:48:10.710 13:10:29 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:48:10.710 13:10:29 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:48:10.710 13:10:29 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:48:10.710 13:10:29 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:48:10.710 13:10:29 -- nvmf/common.sh@191 -- # ip link add nvmf_br type 
bridge 00:48:10.710 13:10:29 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:48:10.710 13:10:29 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:48:10.710 13:10:29 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:48:10.710 13:10:29 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:48:10.710 13:10:29 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:48:10.710 13:10:29 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:48:10.710 13:10:29 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:48:10.710 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:48:10.710 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.351 ms 00:48:10.710 00:48:10.710 --- 10.0.0.2 ping statistics --- 00:48:10.710 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:48:10.710 rtt min/avg/max/mdev = 0.351/0.351/0.351/0.000 ms 00:48:10.710 13:10:29 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:48:10.710 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:48:10.710 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:48:10.710 00:48:10.710 --- 10.0.0.3 ping statistics --- 00:48:10.710 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:48:10.710 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:48:10.710 13:10:29 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:48:10.710 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:48:10.710 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:48:10.710 00:48:10.710 --- 10.0.0.1 ping statistics --- 00:48:10.710 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:48:10.710 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:48:10.710 13:10:29 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:48:10.710 13:10:29 -- nvmf/common.sh@421 -- # return 0 00:48:10.710 13:10:29 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:48:10.710 13:10:29 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:48:10.710 13:10:29 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:48:10.710 13:10:29 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:48:10.710 13:10:29 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:48:10.710 13:10:29 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:48:10.710 13:10:29 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:48:10.710 13:10:30 -- host/mdns_discovery.sh@28 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:48:10.710 13:10:30 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:48:10.710 13:10:30 -- common/autotest_common.sh@712 -- # xtrace_disable 00:48:10.710 13:10:30 -- common/autotest_common.sh@10 -- # set +x 00:48:10.710 13:10:30 -- nvmf/common.sh@469 -- # nvmfpid=97507 00:48:10.710 13:10:30 -- nvmf/common.sh@470 -- # waitforlisten 97507 00:48:10.710 13:10:30 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:48:10.710 13:10:30 -- common/autotest_common.sh@819 -- # '[' -z 97507 ']' 00:48:10.710 13:10:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:48:10.710 13:10:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:48:10.710 13:10:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:48:10.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
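The nvmf_veth_init steps traced above build a small virtual topology before the target starts: the initiator veth pair (nvmf_init_if / nvmf_init_br) stays in the root namespace, the target-side pairs (nvmf_tgt_if / nvmf_tgt_br and nvmf_tgt_if2 / nvmf_tgt_br2) get their "if" ends moved into the nvmf_tgt_ns_spdk namespace, and the bridge nvmf_br ties the *_br ends together. A condensed sketch of the same setup, with interface names and addresses taken from the trace (the 10.0.0.3 / nvmf_tgt_if2 leg is elided for brevity):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side, stays in root ns
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target side
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2    # root ns -> target ns reachability check, as in the trace above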
00:48:10.710 13:10:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:48:10.710 13:10:30 -- common/autotest_common.sh@10 -- # set +x 00:48:10.710 [2024-07-22 13:10:30.066652] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:48:10.710 [2024-07-22 13:10:30.066744] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:48:11.002 [2024-07-22 13:10:30.207575] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:11.002 [2024-07-22 13:10:30.285198] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:48:11.002 [2024-07-22 13:10:30.285340] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:48:11.002 [2024-07-22 13:10:30.285353] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:48:11.002 [2024-07-22 13:10:30.285362] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:48:11.002 [2024-07-22 13:10:30.285386] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:48:11.936 13:10:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:48:11.936 13:10:31 -- common/autotest_common.sh@852 -- # return 0 00:48:11.936 13:10:31 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:48:11.936 13:10:31 -- common/autotest_common.sh@718 -- # xtrace_disable 00:48:11.936 13:10:31 -- common/autotest_common.sh@10 -- # set +x 00:48:11.936 13:10:31 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:48:11.936 13:10:31 -- host/mdns_discovery.sh@30 -- # rpc_cmd nvmf_set_config --discovery-filter=address 00:48:11.936 13:10:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:48:11.936 13:10:31 -- common/autotest_common.sh@10 -- # set +x 00:48:11.936 13:10:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:48:11.936 13:10:31 -- host/mdns_discovery.sh@31 -- # rpc_cmd framework_start_init 00:48:11.936 13:10:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:48:11.937 13:10:31 -- common/autotest_common.sh@10 -- # set +x 00:48:11.937 13:10:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:48:11.937 13:10:31 -- host/mdns_discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:48:11.937 13:10:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:48:11.937 13:10:31 -- common/autotest_common.sh@10 -- # set +x 00:48:11.937 [2024-07-22 13:10:31.210887] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:48:11.937 13:10:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:48:11.937 13:10:31 -- host/mdns_discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:48:11.937 13:10:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:48:11.937 13:10:31 -- common/autotest_common.sh@10 -- # set +x 00:48:11.937 [2024-07-22 13:10:31.223025] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:48:11.937 13:10:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:48:11.937 13:10:31 -- host/mdns_discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:48:11.937 13:10:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:48:11.937 13:10:31 -- 
common/autotest_common.sh@10 -- # set +x 00:48:11.937 null0 00:48:11.937 13:10:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:48:11.937 13:10:31 -- host/mdns_discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:48:11.937 13:10:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:48:11.937 13:10:31 -- common/autotest_common.sh@10 -- # set +x 00:48:11.937 null1 00:48:11.937 13:10:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:48:11.937 13:10:31 -- host/mdns_discovery.sh@37 -- # rpc_cmd bdev_null_create null2 1000 512 00:48:11.937 13:10:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:48:11.937 13:10:31 -- common/autotest_common.sh@10 -- # set +x 00:48:11.937 null2 00:48:11.937 13:10:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:48:11.937 13:10:31 -- host/mdns_discovery.sh@38 -- # rpc_cmd bdev_null_create null3 1000 512 00:48:11.937 13:10:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:48:11.937 13:10:31 -- common/autotest_common.sh@10 -- # set +x 00:48:11.937 null3 00:48:11.937 13:10:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:48:11.937 13:10:31 -- host/mdns_discovery.sh@39 -- # rpc_cmd bdev_wait_for_examine 00:48:11.937 13:10:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:48:11.937 13:10:31 -- common/autotest_common.sh@10 -- # set +x 00:48:11.937 13:10:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:48:11.937 13:10:31 -- host/mdns_discovery.sh@47 -- # hostpid=97557 00:48:11.937 13:10:31 -- host/mdns_discovery.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:48:11.937 13:10:31 -- host/mdns_discovery.sh@48 -- # waitforlisten 97557 /tmp/host.sock 00:48:11.937 13:10:31 -- common/autotest_common.sh@819 -- # '[' -z 97557 ']' 00:48:11.937 13:10:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/tmp/host.sock 00:48:11.937 13:10:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:48:11.937 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:48:11.937 13:10:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:48:11.937 13:10:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:48:11.937 13:10:31 -- common/autotest_common.sh@10 -- # set +x 00:48:11.937 [2024-07-22 13:10:31.333867] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
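The rpc_cmd calls above configure the target application that was started with --wait-for-rpc inside the namespace: discovery filtering by address, a TCP transport, a discovery listener on port 8009, and four null bdevs that later become namespaces. A rough hand-run equivalent with scripts/rpc.py would look like the sketch below (socket path assumed to be the default /var/tmp/spdk.sock; size/block-size arguments copied from the trace):

    ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc &
    ./scripts/rpc.py nvmf_set_config --discovery-filter=address
    ./scripts/rpc.py framework_start_init
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
    for b in null0 null1 null2 null3; do ./scripts/rpc.py bdev_null_create "$b" 1000 512; done
    ./scripts/rpc.py bdev_wait_for_examine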
00:48:11.937 [2024-07-22 13:10:31.333968] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97557 ] 00:48:12.195 [2024-07-22 13:10:31.477256] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:12.195 [2024-07-22 13:10:31.551465] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:48:12.195 [2024-07-22 13:10:31.551668] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:48:13.130 13:10:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:48:13.130 13:10:32 -- common/autotest_common.sh@852 -- # return 0 00:48:13.130 13:10:32 -- host/mdns_discovery.sh@50 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;exit 1' SIGINT SIGTERM 00:48:13.130 13:10:32 -- host/mdns_discovery.sh@51 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;nvmftestfini;kill $hostpid;kill $avahi_clientpid;kill $avahipid;' EXIT 00:48:13.130 13:10:32 -- host/mdns_discovery.sh@55 -- # avahi-daemon --kill 00:48:13.130 13:10:32 -- host/mdns_discovery.sh@57 -- # avahipid=97586 00:48:13.130 13:10:32 -- host/mdns_discovery.sh@58 -- # sleep 1 00:48:13.130 13:10:32 -- host/mdns_discovery.sh@56 -- # echo -e '[server]\nallow-interfaces=nvmf_tgt_if,nvmf_tgt_if2\nuse-ipv4=yes\nuse-ipv6=no' 00:48:13.130 13:10:32 -- host/mdns_discovery.sh@56 -- # ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f /dev/fd/63 00:48:13.130 Process 983 died: No such process; trying to remove PID file. (/run/avahi-daemon//pid) 00:48:13.130 Found user 'avahi' (UID 70) and group 'avahi' (GID 70). 00:48:13.130 Successfully dropped root privileges. 00:48:13.130 avahi-daemon 0.8 starting up. 00:48:13.130 WARNING: No NSS support for mDNS detected, consider installing nss-mdns! 00:48:13.130 Successfully called chroot(). 00:48:13.130 Successfully dropped remaining capabilities. 00:48:13.130 No service file found in /etc/avahi/services. 00:48:14.065 Joining mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 00:48:14.065 New relevant interface nvmf_tgt_if2.IPv4 for mDNS. 00:48:14.065 Joining mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:48:14.065 New relevant interface nvmf_tgt_if.IPv4 for mDNS. 00:48:14.065 Network interface enumeration completed. 00:48:14.065 Registering new address record for fe80::587a:63ff:fef9:f6a7 on nvmf_tgt_if2.*. 00:48:14.065 Registering new address record for 10.0.0.3 on nvmf_tgt_if2.IPv4. 00:48:14.065 Registering new address record for fe80::e073:5fff:fecc:6446 on nvmf_tgt_if.*. 00:48:14.065 Registering new address record for 10.0.0.2 on nvmf_tgt_if.IPv4. 00:48:14.065 Server startup complete. Host name is fedora38-cloud-1716830599-074-updated-1705279005.local. Local service cookie is 4080551034. 
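The avahi-daemon instance above runs inside the target namespace with a minimal configuration fed through process substitution (the -f /dev/fd/63 argument in the trace). Written out as a file, the configuration and launch are roughly the following sketch (the config path is hypothetical; the test itself passes the config via <(echo -e ...)):

    # avahi.conf -- restrict mDNS to the two target-side interfaces, IPv4 only
    [server]
    allow-interfaces=nvmf_tgt_if,nvmf_tgt_if2
    use-ipv4=yes
    use-ipv6=no

    ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f /path/to/avahi.conf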
00:48:14.065 13:10:33 -- host/mdns_discovery.sh@60 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:48:14.065 13:10:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:48:14.065 13:10:33 -- common/autotest_common.sh@10 -- # set +x 00:48:14.065 13:10:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:48:14.066 13:10:33 -- host/mdns_discovery.sh@61 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:48:14.066 13:10:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:48:14.066 13:10:33 -- common/autotest_common.sh@10 -- # set +x 00:48:14.066 13:10:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:48:14.066 13:10:33 -- host/mdns_discovery.sh@85 -- # notify_id=0 00:48:14.066 13:10:33 -- host/mdns_discovery.sh@91 -- # get_subsystem_names 00:48:14.066 13:10:33 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:48:14.066 13:10:33 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:48:14.066 13:10:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:48:14.066 13:10:33 -- common/autotest_common.sh@10 -- # set +x 00:48:14.066 13:10:33 -- host/mdns_discovery.sh@68 -- # sort 00:48:14.066 13:10:33 -- host/mdns_discovery.sh@68 -- # xargs 00:48:14.066 13:10:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:48:14.066 13:10:33 -- host/mdns_discovery.sh@91 -- # [[ '' == '' ]] 00:48:14.066 13:10:33 -- host/mdns_discovery.sh@92 -- # get_bdev_list 00:48:14.066 13:10:33 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:48:14.066 13:10:33 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:48:14.066 13:10:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:48:14.066 13:10:33 -- host/mdns_discovery.sh@64 -- # sort 00:48:14.066 13:10:33 -- common/autotest_common.sh@10 -- # set +x 00:48:14.066 13:10:33 -- host/mdns_discovery.sh@64 -- # xargs 00:48:14.066 13:10:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:48:14.066 13:10:33 -- host/mdns_discovery.sh@92 -- # [[ '' == '' ]] 00:48:14.066 13:10:33 -- host/mdns_discovery.sh@94 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:48:14.066 13:10:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:48:14.066 13:10:33 -- common/autotest_common.sh@10 -- # set +x 00:48:14.066 13:10:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:48:14.066 13:10:33 -- host/mdns_discovery.sh@95 -- # get_subsystem_names 00:48:14.066 13:10:33 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:48:14.066 13:10:33 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:48:14.066 13:10:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:48:14.066 13:10:33 -- common/autotest_common.sh@10 -- # set +x 00:48:14.066 13:10:33 -- host/mdns_discovery.sh@68 -- # xargs 00:48:14.066 13:10:33 -- host/mdns_discovery.sh@68 -- # sort 00:48:14.066 13:10:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:48:14.324 13:10:33 -- host/mdns_discovery.sh@95 -- # [[ '' == '' ]] 00:48:14.324 13:10:33 -- host/mdns_discovery.sh@96 -- # get_bdev_list 00:48:14.324 13:10:33 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:48:14.324 13:10:33 -- host/mdns_discovery.sh@64 -- # sort 00:48:14.324 13:10:33 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:48:14.324 13:10:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:48:14.324 13:10:33 -- common/autotest_common.sh@10 -- # set +x 00:48:14.324 13:10:33 -- 
host/mdns_discovery.sh@64 -- # xargs 00:48:14.324 13:10:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:48:14.324 13:10:33 -- host/mdns_discovery.sh@96 -- # [[ '' == '' ]] 00:48:14.324 13:10:33 -- host/mdns_discovery.sh@98 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:48:14.324 13:10:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:48:14.324 13:10:33 -- common/autotest_common.sh@10 -- # set +x 00:48:14.324 13:10:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:48:14.324 13:10:33 -- host/mdns_discovery.sh@99 -- # get_subsystem_names 00:48:14.324 13:10:33 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:48:14.324 13:10:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:48:14.324 13:10:33 -- common/autotest_common.sh@10 -- # set +x 00:48:14.324 13:10:33 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:48:14.324 13:10:33 -- host/mdns_discovery.sh@68 -- # sort 00:48:14.324 13:10:33 -- host/mdns_discovery.sh@68 -- # xargs 00:48:14.324 13:10:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:48:14.324 [2024-07-22 13:10:33.633332] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:48:14.324 13:10:33 -- host/mdns_discovery.sh@99 -- # [[ '' == '' ]] 00:48:14.324 13:10:33 -- host/mdns_discovery.sh@100 -- # get_bdev_list 00:48:14.324 13:10:33 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:48:14.324 13:10:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:48:14.324 13:10:33 -- common/autotest_common.sh@10 -- # set +x 00:48:14.324 13:10:33 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:48:14.324 13:10:33 -- host/mdns_discovery.sh@64 -- # sort 00:48:14.324 13:10:33 -- host/mdns_discovery.sh@64 -- # xargs 00:48:14.324 13:10:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:48:14.324 13:10:33 -- host/mdns_discovery.sh@100 -- # [[ '' == '' ]] 00:48:14.324 13:10:33 -- host/mdns_discovery.sh@104 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:48:14.324 13:10:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:48:14.324 13:10:33 -- common/autotest_common.sh@10 -- # set +x 00:48:14.324 [2024-07-22 13:10:33.707864] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:48:14.324 13:10:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:48:14.324 13:10:33 -- host/mdns_discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:48:14.324 13:10:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:48:14.324 13:10:33 -- common/autotest_common.sh@10 -- # set +x 00:48:14.324 13:10:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:48:14.324 13:10:33 -- host/mdns_discovery.sh@111 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20 00:48:14.324 13:10:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:48:14.324 13:10:33 -- common/autotest_common.sh@10 -- # set +x 00:48:14.324 13:10:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:48:14.324 13:10:33 -- host/mdns_discovery.sh@112 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null2 00:48:14.324 13:10:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:48:14.324 13:10:33 -- common/autotest_common.sh@10 -- # set +x 00:48:14.324 13:10:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:48:14.324 13:10:33 -- host/mdns_discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2016-06.io.spdk:cnode20 nqn.2021-12.io.spdk:test 00:48:14.324 13:10:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:48:14.324 13:10:33 -- common/autotest_common.sh@10 -- # set +x 00:48:14.324 13:10:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:48:14.324 13:10:33 -- host/mdns_discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:48:14.324 13:10:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:48:14.324 13:10:33 -- common/autotest_common.sh@10 -- # set +x 00:48:14.582 [2024-07-22 13:10:33.747854] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:48:14.582 13:10:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:48:14.582 13:10:33 -- host/mdns_discovery.sh@120 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:48:14.582 13:10:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:48:14.582 13:10:33 -- common/autotest_common.sh@10 -- # set +x 00:48:14.582 [2024-07-22 13:10:33.755808] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:48:14.582 13:10:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:48:14.582 13:10:33 -- host/mdns_discovery.sh@124 -- # avahi_clientpid=97643 00:48:14.582 13:10:33 -- host/mdns_discovery.sh@123 -- # ip netns exec nvmf_tgt_ns_spdk /usr/bin/avahi-publish --domain=local --service CDC _nvme-disc._tcp 8009 NQN=nqn.2014-08.org.nvmexpress.discovery p=tcp 00:48:14.582 13:10:33 -- host/mdns_discovery.sh@125 -- # sleep 5 00:48:15.147 [2024-07-22 13:10:34.533338] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:48:15.405 Established under name 'CDC' 00:48:15.663 [2024-07-22 13:10:34.933351] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:48:15.663 [2024-07-22 13:10:34.933377] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.3) 00:48:15.663 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:48:15.663 cookie is 0 00:48:15.663 is_local: 1 00:48:15.663 our_own: 0 00:48:15.663 wide_area: 0 00:48:15.663 multicast: 1 00:48:15.663 cached: 1 00:48:15.663 [2024-07-22 13:10:35.033345] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:48:15.663 [2024-07-22 13:10:35.033367] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.2) 00:48:15.663 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:48:15.663 cookie is 0 00:48:15.663 is_local: 1 00:48:15.663 our_own: 0 00:48:15.663 wide_area: 0 00:48:15.663 multicast: 1 00:48:15.663 cached: 1 00:48:16.599 [2024-07-22 13:10:35.944182] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:48:16.599 [2024-07-22 13:10:35.944226] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:48:16.599 [2024-07-22 13:10:35.944243] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:48:16.858 [2024-07-22 13:10:36.030295] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 new subsystem mdns0_nvme0 00:48:16.858 [2024-07-22 13:10:36.043908] bdev_nvme.c:6759:discovery_attach_cb: 
*INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:48:16.858 [2024-07-22 13:10:36.043929] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:48:16.858 [2024-07-22 13:10:36.043959] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:48:16.858 [2024-07-22 13:10:36.094661] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:48:16.858 [2024-07-22 13:10:36.094704] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:48:16.858 [2024-07-22 13:10:36.129774] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem mdns1_nvme0 00:48:16.858 [2024-07-22 13:10:36.184302] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:48:16.858 [2024-07-22 13:10:36.184329] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:48:19.421 13:10:38 -- host/mdns_discovery.sh@127 -- # get_mdns_discovery_svcs 00:48:19.421 13:10:38 -- host/mdns_discovery.sh@80 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:48:19.421 13:10:38 -- host/mdns_discovery.sh@80 -- # jq -r '.[].name' 00:48:19.421 13:10:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:48:19.421 13:10:38 -- host/mdns_discovery.sh@80 -- # sort 00:48:19.421 13:10:38 -- common/autotest_common.sh@10 -- # set +x 00:48:19.421 13:10:38 -- host/mdns_discovery.sh@80 -- # xargs 00:48:19.421 13:10:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:48:19.421 13:10:38 -- host/mdns_discovery.sh@127 -- # [[ mdns == \m\d\n\s ]] 00:48:19.421 13:10:38 -- host/mdns_discovery.sh@128 -- # get_discovery_ctrlrs 00:48:19.421 13:10:38 -- host/mdns_discovery.sh@76 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:48:19.421 13:10:38 -- host/mdns_discovery.sh@76 -- # jq -r '.[].name' 00:48:19.421 13:10:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:48:19.421 13:10:38 -- common/autotest_common.sh@10 -- # set +x 00:48:19.421 13:10:38 -- host/mdns_discovery.sh@76 -- # xargs 00:48:19.421 13:10:38 -- host/mdns_discovery.sh@76 -- # sort 00:48:19.678 13:10:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:48:19.679 13:10:38 -- host/mdns_discovery.sh@128 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:48:19.679 13:10:38 -- host/mdns_discovery.sh@129 -- # get_subsystem_names 00:48:19.679 13:10:38 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:48:19.679 13:10:38 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:48:19.679 13:10:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:48:19.679 13:10:38 -- host/mdns_discovery.sh@68 -- # sort 00:48:19.679 13:10:38 -- common/autotest_common.sh@10 -- # set +x 00:48:19.679 13:10:38 -- host/mdns_discovery.sh@68 -- # xargs 00:48:19.679 13:10:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:48:19.679 13:10:38 -- host/mdns_discovery.sh@129 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:48:19.679 13:10:38 -- host/mdns_discovery.sh@130 -- # get_bdev_list 00:48:19.679 13:10:38 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:48:19.679 13:10:38 -- host/mdns_discovery.sh@64 
-- # jq -r '.[].name' 00:48:19.679 13:10:38 -- host/mdns_discovery.sh@64 -- # sort 00:48:19.679 13:10:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:48:19.679 13:10:38 -- common/autotest_common.sh@10 -- # set +x 00:48:19.679 13:10:38 -- host/mdns_discovery.sh@64 -- # xargs 00:48:19.679 13:10:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:48:19.679 13:10:38 -- host/mdns_discovery.sh@130 -- # [[ mdns0_nvme0n1 mdns1_nvme0n1 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\1 ]] 00:48:19.679 13:10:38 -- host/mdns_discovery.sh@131 -- # get_subsystem_paths mdns0_nvme0 00:48:19.679 13:10:38 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:48:19.679 13:10:38 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:48:19.679 13:10:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:48:19.679 13:10:38 -- common/autotest_common.sh@10 -- # set +x 00:48:19.679 13:10:38 -- host/mdns_discovery.sh@72 -- # sort -n 00:48:19.679 13:10:38 -- host/mdns_discovery.sh@72 -- # xargs 00:48:19.679 13:10:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:48:19.679 13:10:39 -- host/mdns_discovery.sh@131 -- # [[ 4420 == \4\4\2\0 ]] 00:48:19.679 13:10:39 -- host/mdns_discovery.sh@132 -- # get_subsystem_paths mdns1_nvme0 00:48:19.679 13:10:39 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:48:19.679 13:10:39 -- host/mdns_discovery.sh@72 -- # sort -n 00:48:19.679 13:10:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:48:19.679 13:10:39 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:48:19.679 13:10:39 -- common/autotest_common.sh@10 -- # set +x 00:48:19.679 13:10:39 -- host/mdns_discovery.sh@72 -- # xargs 00:48:19.679 13:10:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:48:19.937 13:10:39 -- host/mdns_discovery.sh@132 -- # [[ 4420 == \4\4\2\0 ]] 00:48:19.937 13:10:39 -- host/mdns_discovery.sh@133 -- # get_notification_count 00:48:19.937 13:10:39 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:48:19.937 13:10:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:48:19.937 13:10:39 -- common/autotest_common.sh@10 -- # set +x 00:48:19.937 13:10:39 -- host/mdns_discovery.sh@87 -- # jq '. 
| length' 00:48:19.937 13:10:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:48:19.937 13:10:39 -- host/mdns_discovery.sh@87 -- # notification_count=2 00:48:19.937 13:10:39 -- host/mdns_discovery.sh@88 -- # notify_id=2 00:48:19.937 13:10:39 -- host/mdns_discovery.sh@134 -- # [[ 2 == 2 ]] 00:48:19.937 13:10:39 -- host/mdns_discovery.sh@137 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:48:19.937 13:10:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:48:19.937 13:10:39 -- common/autotest_common.sh@10 -- # set +x 00:48:19.937 13:10:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:48:19.937 13:10:39 -- host/mdns_discovery.sh@138 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null3 00:48:19.937 13:10:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:48:19.937 13:10:39 -- common/autotest_common.sh@10 -- # set +x 00:48:19.937 13:10:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:48:19.937 13:10:39 -- host/mdns_discovery.sh@139 -- # sleep 1 00:48:20.870 13:10:40 -- host/mdns_discovery.sh@141 -- # get_bdev_list 00:48:20.870 13:10:40 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:48:20.870 13:10:40 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:48:20.870 13:10:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:48:20.870 13:10:40 -- host/mdns_discovery.sh@64 -- # sort 00:48:20.870 13:10:40 -- common/autotest_common.sh@10 -- # set +x 00:48:20.870 13:10:40 -- host/mdns_discovery.sh@64 -- # xargs 00:48:20.870 13:10:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:48:20.870 13:10:40 -- host/mdns_discovery.sh@141 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:48:20.870 13:10:40 -- host/mdns_discovery.sh@142 -- # get_notification_count 00:48:20.870 13:10:40 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:48:20.870 13:10:40 -- host/mdns_discovery.sh@87 -- # jq '. 
| length' 00:48:20.870 13:10:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:48:20.870 13:10:40 -- common/autotest_common.sh@10 -- # set +x 00:48:20.870 13:10:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:48:20.870 13:10:40 -- host/mdns_discovery.sh@87 -- # notification_count=2 00:48:20.870 13:10:40 -- host/mdns_discovery.sh@88 -- # notify_id=4 00:48:20.870 13:10:40 -- host/mdns_discovery.sh@143 -- # [[ 2 == 2 ]] 00:48:20.871 13:10:40 -- host/mdns_discovery.sh@147 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:48:20.871 13:10:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:48:20.871 13:10:40 -- common/autotest_common.sh@10 -- # set +x 00:48:20.871 [2024-07-22 13:10:40.290602] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:48:20.871 [2024-07-22 13:10:40.291018] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:48:20.871 [2024-07-22 13:10:40.291055] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:48:20.871 [2024-07-22 13:10:40.291091] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:48:20.871 [2024-07-22 13:10:40.291106] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:48:21.128 13:10:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:48:21.128 13:10:40 -- host/mdns_discovery.sh@148 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4421 00:48:21.128 13:10:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:48:21.128 13:10:40 -- common/autotest_common.sh@10 -- # set +x 00:48:21.128 [2024-07-22 13:10:40.298492] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:48:21.128 [2024-07-22 13:10:40.299025] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:48:21.128 [2024-07-22 13:10:40.299083] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:48:21.128 13:10:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:48:21.128 13:10:40 -- host/mdns_discovery.sh@149 -- # sleep 1 00:48:21.128 [2024-07-22 13:10:40.430144] bdev_nvme.c:6683:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for mdns1_nvme0 00:48:21.128 [2024-07-22 13:10:40.430311] bdev_nvme.c:6683:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new path for mdns0_nvme0 00:48:21.128 [2024-07-22 13:10:40.487379] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:48:21.128 [2024-07-22 13:10:40.487403] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:48:21.128 [2024-07-22 13:10:40.487426] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:48:21.129 [2024-07-22 13:10:40.487442] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:48:21.129 [2024-07-22 13:10:40.487483] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:48:21.129 [2024-07-22 13:10:40.487492] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: 
Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:48:21.129 [2024-07-22 13:10:40.487497] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:48:21.129 [2024-07-22 13:10:40.487510] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:48:21.129 [2024-07-22 13:10:40.533319] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:48:21.129 [2024-07-22 13:10:40.533343] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:48:21.129 [2024-07-22 13:10:40.534308] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:48:21.129 [2024-07-22 13:10:40.534327] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:48:22.062 13:10:41 -- host/mdns_discovery.sh@151 -- # get_subsystem_names 00:48:22.062 13:10:41 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:48:22.062 13:10:41 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:48:22.063 13:10:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:48:22.063 13:10:41 -- common/autotest_common.sh@10 -- # set +x 00:48:22.063 13:10:41 -- host/mdns_discovery.sh@68 -- # sort 00:48:22.063 13:10:41 -- host/mdns_discovery.sh@68 -- # xargs 00:48:22.063 13:10:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:48:22.063 13:10:41 -- host/mdns_discovery.sh@151 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:48:22.063 13:10:41 -- host/mdns_discovery.sh@152 -- # get_bdev_list 00:48:22.063 13:10:41 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:48:22.063 13:10:41 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:48:22.063 13:10:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:48:22.063 13:10:41 -- host/mdns_discovery.sh@64 -- # sort 00:48:22.063 13:10:41 -- common/autotest_common.sh@10 -- # set +x 00:48:22.063 13:10:41 -- host/mdns_discovery.sh@64 -- # xargs 00:48:22.063 13:10:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:48:22.063 13:10:41 -- host/mdns_discovery.sh@152 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:48:22.063 13:10:41 -- host/mdns_discovery.sh@153 -- # get_subsystem_paths mdns0_nvme0 00:48:22.063 13:10:41 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:48:22.063 13:10:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:48:22.063 13:10:41 -- common/autotest_common.sh@10 -- # set +x 00:48:22.063 13:10:41 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:48:22.063 13:10:41 -- host/mdns_discovery.sh@72 -- # sort -n 00:48:22.063 13:10:41 -- host/mdns_discovery.sh@72 -- # xargs 00:48:22.063 13:10:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:48:22.323 13:10:41 -- host/mdns_discovery.sh@153 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:48:22.323 13:10:41 -- host/mdns_discovery.sh@154 -- # get_subsystem_paths mdns1_nvme0 00:48:22.323 13:10:41 -- host/mdns_discovery.sh@72 -- # 
rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:48:22.323 13:10:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:48:22.323 13:10:41 -- common/autotest_common.sh@10 -- # set +x 00:48:22.323 13:10:41 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:48:22.323 13:10:41 -- host/mdns_discovery.sh@72 -- # sort -n 00:48:22.323 13:10:41 -- host/mdns_discovery.sh@72 -- # xargs 00:48:22.323 13:10:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:48:22.323 13:10:41 -- host/mdns_discovery.sh@154 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:48:22.323 13:10:41 -- host/mdns_discovery.sh@155 -- # get_notification_count 00:48:22.323 13:10:41 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:48:22.323 13:10:41 -- host/mdns_discovery.sh@87 -- # jq '. | length' 00:48:22.323 13:10:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:48:22.323 13:10:41 -- common/autotest_common.sh@10 -- # set +x 00:48:22.323 13:10:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:48:22.323 13:10:41 -- host/mdns_discovery.sh@87 -- # notification_count=0 00:48:22.323 13:10:41 -- host/mdns_discovery.sh@88 -- # notify_id=4 00:48:22.323 13:10:41 -- host/mdns_discovery.sh@156 -- # [[ 0 == 0 ]] 00:48:22.323 13:10:41 -- host/mdns_discovery.sh@160 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:48:22.323 13:10:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:48:22.323 13:10:41 -- common/autotest_common.sh@10 -- # set +x 00:48:22.323 [2024-07-22 13:10:41.611944] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:48:22.323 [2024-07-22 13:10:41.611994] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:48:22.323 [2024-07-22 13:10:41.612025] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:48:22.323 [2024-07-22 13:10:41.612038] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:48:22.323 [2024-07-22 13:10:41.614104] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:48:22.323 [2024-07-22 13:10:41.614181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:22.323 [2024-07-22 13:10:41.614212] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:48:22.323 [2024-07-22 13:10:41.614222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:22.323 [2024-07-22 13:10:41.614231] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:48:22.323 [2024-07-22 13:10:41.614241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:22.323 [2024-07-22 13:10:41.614250] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:48:22.323 [2024-07-22 13:10:41.614259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:22.323 [2024-07-22 13:10:41.614268] nvme_tcp.c: 
322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb61f90 is same with the state(5) to be set 00:48:22.323 13:10:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:48:22.323 13:10:41 -- host/mdns_discovery.sh@161 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:48:22.323 13:10:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:48:22.323 13:10:41 -- common/autotest_common.sh@10 -- # set +x 00:48:22.323 [2024-07-22 13:10:41.619964] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:48:22.323 [2024-07-22 13:10:41.620047] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:48:22.323 [2024-07-22 13:10:41.623072] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:48:22.323 [2024-07-22 13:10:41.623118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:22.323 [2024-07-22 13:10:41.623141] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:48:22.323 [2024-07-22 13:10:41.623185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:22.323 [2024-07-22 13:10:41.623197] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:48:22.323 [2024-07-22 13:10:41.623206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:22.323 [2024-07-22 13:10:41.623215] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:48:22.323 [2024-07-22 13:10:41.623224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:22.323 [2024-07-22 13:10:41.623233] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb121d0 is same with the state(5) to be set 00:48:22.323 [2024-07-22 13:10:41.624068] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb61f90 (9): Bad file descriptor 00:48:22.323 13:10:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:48:22.323 13:10:41 -- host/mdns_discovery.sh@162 -- # sleep 1 00:48:22.323 [2024-07-22 13:10:41.633024] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb121d0 (9): Bad file descriptor 00:48:22.323 [2024-07-22 13:10:41.634082] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:48:22.323 [2024-07-22 13:10:41.634275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:48:22.323 [2024-07-22 13:10:41.634325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:48:22.323 [2024-07-22 13:10:41.634342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb61f90 with addr=10.0.0.2, port=4420 00:48:22.323 [2024-07-22 13:10:41.634352] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb61f90 is same with the state(5) to be set 00:48:22.323 [2024-07-22 13:10:41.634369] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush 
tqpair=0xb61f90 (9): Bad file descriptor 00:48:22.323 [2024-07-22 13:10:41.634401] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:48:22.323 [2024-07-22 13:10:41.634411] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:48:22.323 [2024-07-22 13:10:41.634422] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:48:22.323 [2024-07-22 13:10:41.634438] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:48:22.323 [2024-07-22 13:10:41.643033] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:48:22.323 [2024-07-22 13:10:41.643167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:48:22.323 [2024-07-22 13:10:41.643224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:48:22.323 [2024-07-22 13:10:41.643240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121d0 with addr=10.0.0.3, port=4420 00:48:22.323 [2024-07-22 13:10:41.643249] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb121d0 is same with the state(5) to be set 00:48:22.323 [2024-07-22 13:10:41.643264] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb121d0 (9): Bad file descriptor 00:48:22.323 [2024-07-22 13:10:41.643278] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:48:22.323 [2024-07-22 13:10:41.643286] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:48:22.323 [2024-07-22 13:10:41.643294] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:48:22.323 [2024-07-22 13:10:41.643307] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:48:22.323 [2024-07-22 13:10:41.644176] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:48:22.323 [2024-07-22 13:10:41.644277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:48:22.323 [2024-07-22 13:10:41.644321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:48:22.323 [2024-07-22 13:10:41.644336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb61f90 with addr=10.0.0.2, port=4420 00:48:22.323 [2024-07-22 13:10:41.644346] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb61f90 is same with the state(5) to be set 00:48:22.324 [2024-07-22 13:10:41.644361] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb61f90 (9): Bad file descriptor 00:48:22.324 [2024-07-22 13:10:41.644375] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:48:22.324 [2024-07-22 13:10:41.644383] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:48:22.324 [2024-07-22 13:10:41.644392] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:48:22.324 [2024-07-22 13:10:41.644406] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:48:22.324 [2024-07-22 13:10:41.653110] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:48:22.324 [2024-07-22 13:10:41.653220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:48:22.324 [2024-07-22 13:10:41.653262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:48:22.324 [2024-07-22 13:10:41.653276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121d0 with addr=10.0.0.3, port=4420 00:48:22.324 [2024-07-22 13:10:41.653285] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb121d0 is same with the state(5) to be set 00:48:22.324 [2024-07-22 13:10:41.653300] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb121d0 (9): Bad file descriptor 00:48:22.324 [2024-07-22 13:10:41.653313] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:48:22.324 [2024-07-22 13:10:41.653321] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:48:22.324 [2024-07-22 13:10:41.653329] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:48:22.324 [2024-07-22 13:10:41.653341] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:48:22.324 [2024-07-22 13:10:41.654249] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:48:22.324 [2024-07-22 13:10:41.654335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:48:22.324 [2024-07-22 13:10:41.654376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:48:22.324 [2024-07-22 13:10:41.654390] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb61f90 with addr=10.0.0.2, port=4420 00:48:22.324 [2024-07-22 13:10:41.654399] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb61f90 is same with the state(5) to be set 00:48:22.324 [2024-07-22 13:10:41.654414] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb61f90 (9): Bad file descriptor 00:48:22.324 [2024-07-22 13:10:41.654427] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:48:22.324 [2024-07-22 13:10:41.654434] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:48:22.324 [2024-07-22 13:10:41.654442] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:48:22.324 [2024-07-22 13:10:41.654455] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:48:22.324 [2024-07-22 13:10:41.663201] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:48:22.324 [2024-07-22 13:10:41.663302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:48:22.324 [2024-07-22 13:10:41.663343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:48:22.324 [2024-07-22 13:10:41.663357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121d0 with addr=10.0.0.3, port=4420 00:48:22.324 [2024-07-22 13:10:41.663366] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb121d0 is same with the state(5) to be set 00:48:22.324 [2024-07-22 13:10:41.663380] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb121d0 (9): Bad file descriptor 00:48:22.324 [2024-07-22 13:10:41.663392] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:48:22.324 [2024-07-22 13:10:41.663400] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:48:22.324 [2024-07-22 13:10:41.663408] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:48:22.324 [2024-07-22 13:10:41.663421] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:48:22.324 [2024-07-22 13:10:41.664310] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:48:22.324 [2024-07-22 13:10:41.664408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:48:22.324 [2024-07-22 13:10:41.664464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:48:22.324 [2024-07-22 13:10:41.664478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb61f90 with addr=10.0.0.2, port=4420 00:48:22.324 [2024-07-22 13:10:41.664487] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb61f90 is same with the state(5) to be set 00:48:22.324 [2024-07-22 13:10:41.664501] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb61f90 (9): Bad file descriptor 00:48:22.324 [2024-07-22 13:10:41.664514] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:48:22.324 [2024-07-22 13:10:41.664522] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:48:22.324 [2024-07-22 13:10:41.664530] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:48:22.324 [2024-07-22 13:10:41.664543] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:48:22.324 [2024-07-22 13:10:41.673280] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:48:22.324 [2024-07-22 13:10:41.673378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:48:22.324 [2024-07-22 13:10:41.673422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:48:22.324 [2024-07-22 13:10:41.673437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121d0 with addr=10.0.0.3, port=4420 00:48:22.324 [2024-07-22 13:10:41.673446] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb121d0 is same with the state(5) to be set 00:48:22.324 [2024-07-22 13:10:41.673461] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb121d0 (9): Bad file descriptor 00:48:22.324 [2024-07-22 13:10:41.673475] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:48:22.324 [2024-07-22 13:10:41.673483] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:48:22.324 [2024-07-22 13:10:41.673491] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:48:22.324 [2024-07-22 13:10:41.673505] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:48:22.324 [2024-07-22 13:10:41.674386] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:48:22.324 [2024-07-22 13:10:41.674476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:48:22.324 [2024-07-22 13:10:41.674530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:48:22.324 [2024-07-22 13:10:41.674544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb61f90 with addr=10.0.0.2, port=4420 00:48:22.324 [2024-07-22 13:10:41.674570] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb61f90 is same with the state(5) to be set 00:48:22.324 [2024-07-22 13:10:41.674602] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb61f90 (9): Bad file descriptor 00:48:22.324 [2024-07-22 13:10:41.674626] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:48:22.324 [2024-07-22 13:10:41.674635] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:48:22.324 [2024-07-22 13:10:41.674644] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:48:22.324 [2024-07-22 13:10:41.674661] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:48:22.324 [2024-07-22 13:10:41.683348] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:48:22.324 [2024-07-22 13:10:41.683476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:48:22.324 [2024-07-22 13:10:41.683534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:48:22.324 [2024-07-22 13:10:41.683548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121d0 with addr=10.0.0.3, port=4420 00:48:22.324 [2024-07-22 13:10:41.683557] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb121d0 is same with the state(5) to be set 00:48:22.324 [2024-07-22 13:10:41.683572] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb121d0 (9): Bad file descriptor 00:48:22.324 [2024-07-22 13:10:41.683594] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:48:22.324 [2024-07-22 13:10:41.683604] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:48:22.324 [2024-07-22 13:10:41.683612] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:48:22.324 [2024-07-22 13:10:41.683625] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:48:22.324 [2024-07-22 13:10:41.684464] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:48:22.324 [2024-07-22 13:10:41.684568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:48:22.324 [2024-07-22 13:10:41.684610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:48:22.324 [2024-07-22 13:10:41.684624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb61f90 with addr=10.0.0.2, port=4420 00:48:22.324 [2024-07-22 13:10:41.684649] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb61f90 is same with the state(5) to be set 00:48:22.324 [2024-07-22 13:10:41.684663] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb61f90 (9): Bad file descriptor 00:48:22.324 [2024-07-22 13:10:41.684676] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:48:22.324 [2024-07-22 13:10:41.684684] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:48:22.324 [2024-07-22 13:10:41.684692] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:48:22.324 [2024-07-22 13:10:41.684705] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:48:22.324 [2024-07-22 13:10:41.693429] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:48:22.324 [2024-07-22 13:10:41.693535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:48:22.324 [2024-07-22 13:10:41.693577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:48:22.324 [2024-07-22 13:10:41.693591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121d0 with addr=10.0.0.3, port=4420 00:48:22.324 [2024-07-22 13:10:41.693601] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb121d0 is same with the state(5) to be set 00:48:22.324 [2024-07-22 13:10:41.693615] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb121d0 (9): Bad file descriptor 00:48:22.324 [2024-07-22 13:10:41.693628] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:48:22.325 [2024-07-22 13:10:41.693636] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:48:22.325 [2024-07-22 13:10:41.693644] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:48:22.325 [2024-07-22 13:10:41.693656] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:48:22.325 [2024-07-22 13:10:41.694539] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:48:22.325 [2024-07-22 13:10:41.694663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:48:22.325 [2024-07-22 13:10:41.694706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:48:22.325 [2024-07-22 13:10:41.694721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb61f90 with addr=10.0.0.2, port=4420 00:48:22.325 [2024-07-22 13:10:41.694731] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb61f90 is same with the state(5) to be set 00:48:22.325 [2024-07-22 13:10:41.694747] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb61f90 (9): Bad file descriptor 00:48:22.325 [2024-07-22 13:10:41.694761] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:48:22.325 [2024-07-22 13:10:41.694770] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:48:22.325 [2024-07-22 13:10:41.694779] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:48:22.325 [2024-07-22 13:10:41.694793] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:48:22.325 [2024-07-22 13:10:41.703493] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:48:22.325 [2024-07-22 13:10:41.703613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:48:22.325 [2024-07-22 13:10:41.703654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:48:22.325 [2024-07-22 13:10:41.703668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121d0 with addr=10.0.0.3, port=4420 00:48:22.325 [2024-07-22 13:10:41.703677] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb121d0 is same with the state(5) to be set 00:48:22.325 [2024-07-22 13:10:41.703692] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb121d0 (9): Bad file descriptor 00:48:22.325 [2024-07-22 13:10:41.703714] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:48:22.325 [2024-07-22 13:10:41.703724] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:48:22.325 [2024-07-22 13:10:41.703732] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:48:22.325 [2024-07-22 13:10:41.703745] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:48:22.325 [2024-07-22 13:10:41.704611] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:48:22.325 [2024-07-22 13:10:41.704695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:48:22.325 [2024-07-22 13:10:41.704736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:48:22.325 [2024-07-22 13:10:41.704750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb61f90 with addr=10.0.0.2, port=4420 00:48:22.325 [2024-07-22 13:10:41.704759] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb61f90 is same with the state(5) to be set 00:48:22.325 [2024-07-22 13:10:41.704773] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb61f90 (9): Bad file descriptor 00:48:22.325 [2024-07-22 13:10:41.704786] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:48:22.325 [2024-07-22 13:10:41.704794] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:48:22.325 [2024-07-22 13:10:41.704802] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:48:22.325 [2024-07-22 13:10:41.704815] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:48:22.325 [2024-07-22 13:10:41.713560] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:48:22.325 [2024-07-22 13:10:41.713677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:48:22.325 [2024-07-22 13:10:41.713720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:48:22.325 [2024-07-22 13:10:41.713750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121d0 with addr=10.0.0.3, port=4420 00:48:22.325 [2024-07-22 13:10:41.713760] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb121d0 is same with the state(5) to be set 00:48:22.325 [2024-07-22 13:10:41.713775] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb121d0 (9): Bad file descriptor 00:48:22.325 [2024-07-22 13:10:41.713787] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:48:22.325 [2024-07-22 13:10:41.713795] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:48:22.325 [2024-07-22 13:10:41.713804] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:48:22.325 [2024-07-22 13:10:41.713817] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:48:22.325 [2024-07-22 13:10:41.714673] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:48:22.325 [2024-07-22 13:10:41.714749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:48:22.325 [2024-07-22 13:10:41.714793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:48:22.325 [2024-07-22 13:10:41.714808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb61f90 with addr=10.0.0.2, port=4420 00:48:22.325 [2024-07-22 13:10:41.714817] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb61f90 is same with the state(5) to be set 00:48:22.325 [2024-07-22 13:10:41.714833] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb61f90 (9): Bad file descriptor 00:48:22.325 [2024-07-22 13:10:41.714846] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:48:22.325 [2024-07-22 13:10:41.714855] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:48:22.325 [2024-07-22 13:10:41.714864] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:48:22.325 [2024-07-22 13:10:41.714877] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:48:22.325 [2024-07-22 13:10:41.723644] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:48:22.325 [2024-07-22 13:10:41.723747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:48:22.325 [2024-07-22 13:10:41.723788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:48:22.325 [2024-07-22 13:10:41.723802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121d0 with addr=10.0.0.3, port=4420 00:48:22.325 [2024-07-22 13:10:41.723811] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb121d0 is same with the state(5) to be set 00:48:22.325 [2024-07-22 13:10:41.723834] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb121d0 (9): Bad file descriptor 00:48:22.325 [2024-07-22 13:10:41.723849] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:48:22.325 [2024-07-22 13:10:41.723857] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:48:22.325 [2024-07-22 13:10:41.723866] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:48:22.325 [2024-07-22 13:10:41.723878] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:48:22.325 [2024-07-22 13:10:41.724721] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:48:22.325 [2024-07-22 13:10:41.724806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:48:22.325 [2024-07-22 13:10:41.724846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:48:22.325 [2024-07-22 13:10:41.724860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb61f90 with addr=10.0.0.2, port=4420 00:48:22.325 [2024-07-22 13:10:41.724869] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb61f90 is same with the state(5) to be set 00:48:22.325 [2024-07-22 13:10:41.724883] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb61f90 (9): Bad file descriptor 00:48:22.325 [2024-07-22 13:10:41.724897] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:48:22.325 [2024-07-22 13:10:41.724905] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:48:22.325 [2024-07-22 13:10:41.724913] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:48:22.325 [2024-07-22 13:10:41.724926] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:48:22.325 [2024-07-22 13:10:41.733721] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:48:22.325 [2024-07-22 13:10:41.733827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:48:22.325 [2024-07-22 13:10:41.733869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:48:22.325 [2024-07-22 13:10:41.733884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121d0 with addr=10.0.0.3, port=4420 00:48:22.325 [2024-07-22 13:10:41.733893] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb121d0 is same with the state(5) to be set 00:48:22.325 [2024-07-22 13:10:41.733908] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb121d0 (9): Bad file descriptor 00:48:22.325 [2024-07-22 13:10:41.733937] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:48:22.325 [2024-07-22 13:10:41.733962] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:48:22.325 [2024-07-22 13:10:41.733971] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:48:22.325 [2024-07-22 13:10:41.733985] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:48:22.325 [2024-07-22 13:10:41.734783] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:48:22.325 [2024-07-22 13:10:41.734863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:48:22.325 [2024-07-22 13:10:41.734916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:48:22.325 [2024-07-22 13:10:41.734931] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb61f90 with addr=10.0.0.2, port=4420 00:48:22.325 [2024-07-22 13:10:41.734941] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb61f90 is same with the state(5) to be set 00:48:22.325 [2024-07-22 13:10:41.734955] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb61f90 (9): Bad file descriptor 00:48:22.325 [2024-07-22 13:10:41.734969] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:48:22.325 [2024-07-22 13:10:41.734977] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:48:22.326 [2024-07-22 13:10:41.734985] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:48:22.326 [2024-07-22 13:10:41.734998] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:48:22.584 [2024-07-22 13:10:41.743801] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:48:22.584 [2024-07-22 13:10:41.743891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:48:22.584 [2024-07-22 13:10:41.743933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:48:22.584 [2024-07-22 13:10:41.743963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121d0 with addr=10.0.0.3, port=4420 00:48:22.584 [2024-07-22 13:10:41.743973] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb121d0 is same with the state(5) to be set 00:48:22.584 [2024-07-22 13:10:41.743988] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb121d0 (9): Bad file descriptor 00:48:22.584 [2024-07-22 13:10:41.744002] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:48:22.584 [2024-07-22 13:10:41.744011] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:48:22.584 [2024-07-22 13:10:41.744019] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:48:22.584 [2024-07-22 13:10:41.744033] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:48:22.584 [2024-07-22 13:10:41.744833] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:48:22.584 [2024-07-22 13:10:41.744919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:48:22.584 [2024-07-22 13:10:41.744961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:48:22.585 [2024-07-22 13:10:41.744975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb61f90 with addr=10.0.0.2, port=4420 00:48:22.585 [2024-07-22 13:10:41.744985] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb61f90 is same with the state(5) to be set 00:48:22.585 [2024-07-22 13:10:41.744999] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb61f90 (9): Bad file descriptor 00:48:22.585 [2024-07-22 13:10:41.745012] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:48:22.585 [2024-07-22 13:10:41.745021] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:48:22.585 [2024-07-22 13:10:41.745029] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:48:22.585 [2024-07-22 13:10:41.745042] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
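Note on the repeated failures above: errno 111 is ECONNREFUSED. The target has torn down its listeners on port 4420 while it moves the subsystems to port 4421, so every reconnect attempt made by the bdev_nvme reset path (nvme_tcp_qpair_connect_sock via posix_sock_create) is refused until the discovery service re-adds the 4421 path. A minimal, illustrative way to see the same condition from a shell that can reach the target IPs shown in the log (10.0.0.2 / 10.0.0.3); this probe is not part of mdns_discovery.sh, and the port list is just the two values from the trace:

    # Illustrative only: confirm the old listener (4420) refuses connections
    # while the relocated listener (4421) accepts them.
    for port in 4420 4421; do
        if timeout 1 bash -c ">/dev/tcp/10.0.0.2/$port" 2>/dev/null; then
            echo "10.0.0.2:$port is accepting connections"
        else
            echo "10.0.0.2:$port refused -- the errno 111 seen in the reset path"
        fi
    done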
00:48:22.585 [2024-07-22 13:10:41.751117] bdev_nvme.c:6546:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:48:22.585 [2024-07-22 13:10:41.751168] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:48:22.585 [2024-07-22 13:10:41.751187] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:48:22.585 [2024-07-22 13:10:41.752137] bdev_nvme.c:6546:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 not found 00:48:22.585 [2024-07-22 13:10:41.752188] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:48:22.585 [2024-07-22 13:10:41.752206] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:48:22.585 [2024-07-22 13:10:41.837215] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:48:22.585 [2024-07-22 13:10:41.840220] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:48:23.524 13:10:42 -- host/mdns_discovery.sh@164 -- # get_subsystem_names 00:48:23.524 13:10:42 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:48:23.524 13:10:42 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:48:23.524 13:10:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:48:23.524 13:10:42 -- common/autotest_common.sh@10 -- # set +x 00:48:23.524 13:10:42 -- host/mdns_discovery.sh@68 -- # sort 00:48:23.524 13:10:42 -- host/mdns_discovery.sh@68 -- # xargs 00:48:23.524 13:10:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:48:23.524 13:10:42 -- host/mdns_discovery.sh@164 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:48:23.524 13:10:42 -- host/mdns_discovery.sh@165 -- # get_bdev_list 00:48:23.524 13:10:42 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:48:23.524 13:10:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:48:23.524 13:10:42 -- common/autotest_common.sh@10 -- # set +x 00:48:23.524 13:10:42 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:48:23.524 13:10:42 -- host/mdns_discovery.sh@64 -- # sort 00:48:23.524 13:10:42 -- host/mdns_discovery.sh@64 -- # xargs 00:48:23.525 13:10:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:48:23.525 13:10:42 -- host/mdns_discovery.sh@165 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:48:23.525 13:10:42 -- host/mdns_discovery.sh@166 -- # get_subsystem_paths mdns0_nvme0 00:48:23.525 13:10:42 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:48:23.525 13:10:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:48:23.525 13:10:42 -- common/autotest_common.sh@10 -- # set +x 00:48:23.525 13:10:42 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:48:23.525 13:10:42 -- host/mdns_discovery.sh@72 -- # sort -n 00:48:23.525 13:10:42 -- host/mdns_discovery.sh@72 -- # xargs 00:48:23.525 13:10:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
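The "not found" / "found again" pairs above are the discovery poller reacting to a fresh discovery log page: the 10.0.0.2:4420 and 10.0.0.3:4420 paths are removed and the same subsystems are re-registered on port 4421, which is why the earlier connect() attempts against 4420 were refused. The xtrace that follows (get_subsystem_names, get_bdev_list, get_subsystem_paths, continued below) confirms the namespaces survived the move and that each controller's only remaining path reports trsvcid 4421. A condensed, hedged sketch of that verification, reusing the RPC names, socket path and jq filters visible in the trace; check_path is a hypothetical helper of mine, not a function from the test:

    # Hedged sketch of the path check performed by get_subsystem_paths.
    check_path() {
        local ctrlr=$1 want=$2 got
        got=$(scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n "$ctrlr" \
              | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs)
        [[ "$got" == "$want" ]] || echo "unexpected trsvcid for $ctrlr: $got"
    }
    check_path mdns0_nvme0 4421
    check_path mdns1_nvme0 4421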
00:48:23.525 13:10:42 -- host/mdns_discovery.sh@166 -- # [[ 4421 == \4\4\2\1 ]] 00:48:23.525 13:10:42 -- host/mdns_discovery.sh@167 -- # get_subsystem_paths mdns1_nvme0 00:48:23.525 13:10:42 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:48:23.525 13:10:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:48:23.525 13:10:42 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:48:23.525 13:10:42 -- host/mdns_discovery.sh@72 -- # sort -n 00:48:23.525 13:10:42 -- common/autotest_common.sh@10 -- # set +x 00:48:23.525 13:10:42 -- host/mdns_discovery.sh@72 -- # xargs 00:48:23.525 13:10:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:48:23.525 13:10:42 -- host/mdns_discovery.sh@167 -- # [[ 4421 == \4\4\2\1 ]] 00:48:23.525 13:10:42 -- host/mdns_discovery.sh@168 -- # get_notification_count 00:48:23.525 13:10:42 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:48:23.525 13:10:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:48:23.525 13:10:42 -- common/autotest_common.sh@10 -- # set +x 00:48:23.525 13:10:42 -- host/mdns_discovery.sh@87 -- # jq '. | length' 00:48:23.525 13:10:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:48:23.525 13:10:42 -- host/mdns_discovery.sh@87 -- # notification_count=0 00:48:23.525 13:10:42 -- host/mdns_discovery.sh@88 -- # notify_id=4 00:48:23.525 13:10:42 -- host/mdns_discovery.sh@169 -- # [[ 0 == 0 ]] 00:48:23.525 13:10:42 -- host/mdns_discovery.sh@171 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:48:23.525 13:10:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:48:23.525 13:10:42 -- common/autotest_common.sh@10 -- # set +x 00:48:23.525 13:10:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:48:23.525 13:10:42 -- host/mdns_discovery.sh@172 -- # sleep 1 00:48:23.782 [2024-07-22 13:10:43.033404] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:48:24.716 13:10:43 -- host/mdns_discovery.sh@174 -- # get_mdns_discovery_svcs 00:48:24.716 13:10:43 -- host/mdns_discovery.sh@80 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:48:24.716 13:10:43 -- host/mdns_discovery.sh@80 -- # jq -r '.[].name' 00:48:24.716 13:10:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:48:24.716 13:10:43 -- common/autotest_common.sh@10 -- # set +x 00:48:24.716 13:10:43 -- host/mdns_discovery.sh@80 -- # sort 00:48:24.716 13:10:43 -- host/mdns_discovery.sh@80 -- # xargs 00:48:24.716 13:10:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:48:24.716 13:10:44 -- host/mdns_discovery.sh@174 -- # [[ '' == '' ]] 00:48:24.716 13:10:44 -- host/mdns_discovery.sh@175 -- # get_subsystem_names 00:48:24.716 13:10:44 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:48:24.716 13:10:44 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:48:24.716 13:10:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:48:24.716 13:10:44 -- common/autotest_common.sh@10 -- # set +x 00:48:24.716 13:10:44 -- host/mdns_discovery.sh@68 -- # sort 00:48:24.716 13:10:44 -- host/mdns_discovery.sh@68 -- # xargs 00:48:24.716 13:10:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:48:24.716 13:10:44 -- host/mdns_discovery.sh@175 -- # [[ '' == '' ]] 00:48:24.716 13:10:44 -- host/mdns_discovery.sh@176 -- # get_bdev_list 00:48:24.716 13:10:44 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock 
bdev_get_bdevs 00:48:24.716 13:10:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:48:24.716 13:10:44 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:48:24.716 13:10:44 -- common/autotest_common.sh@10 -- # set +x 00:48:24.716 13:10:44 -- host/mdns_discovery.sh@64 -- # sort 00:48:24.716 13:10:44 -- host/mdns_discovery.sh@64 -- # xargs 00:48:24.716 13:10:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:48:24.716 13:10:44 -- host/mdns_discovery.sh@176 -- # [[ '' == '' ]] 00:48:24.716 13:10:44 -- host/mdns_discovery.sh@177 -- # get_notification_count 00:48:24.716 13:10:44 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:48:24.716 13:10:44 -- host/mdns_discovery.sh@87 -- # jq '. | length' 00:48:24.716 13:10:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:48:24.716 13:10:44 -- common/autotest_common.sh@10 -- # set +x 00:48:24.985 13:10:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:48:24.985 13:10:44 -- host/mdns_discovery.sh@87 -- # notification_count=4 00:48:24.985 13:10:44 -- host/mdns_discovery.sh@88 -- # notify_id=8 00:48:24.985 13:10:44 -- host/mdns_discovery.sh@178 -- # [[ 4 == 4 ]] 00:48:24.985 13:10:44 -- host/mdns_discovery.sh@181 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:48:24.985 13:10:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:48:24.985 13:10:44 -- common/autotest_common.sh@10 -- # set +x 00:48:24.985 13:10:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:48:24.985 13:10:44 -- host/mdns_discovery.sh@182 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:48:24.985 13:10:44 -- common/autotest_common.sh@640 -- # local es=0 00:48:24.985 13:10:44 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:48:24.985 13:10:44 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:48:24.985 13:10:44 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:48:24.985 13:10:44 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:48:24.985 13:10:44 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:48:24.985 13:10:44 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:48:24.985 13:10:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:48:24.985 13:10:44 -- common/autotest_common.sh@10 -- # set +x 00:48:24.985 [2024-07-22 13:10:44.196408] bdev_mdns_client.c: 470:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running with name mdns 00:48:24.985 2024/07/22 13:10:44 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:mdns svcname:_nvme-disc._http], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:48:24.985 request: 00:48:24.985 { 00:48:24.985 "method": "bdev_nvme_start_mdns_discovery", 00:48:24.985 "params": { 00:48:24.985 "name": "mdns", 00:48:24.985 "svcname": "_nvme-disc._http", 00:48:24.985 "hostnqn": "nqn.2021-12.io.spdk:test" 00:48:24.985 } 00:48:24.985 } 00:48:24.985 Got JSON-RPC error response 00:48:24.985 GoRPCClient: error on JSON-RPC call 00:48:24.985 13:10:44 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:48:24.985 13:10:44 -- 
common/autotest_common.sh@643 -- # es=1 00:48:24.985 13:10:44 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:48:24.985 13:10:44 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:48:24.985 13:10:44 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:48:24.985 13:10:44 -- host/mdns_discovery.sh@183 -- # sleep 5 00:48:25.257 [2024-07-22 13:10:44.585081] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:48:25.516 [2024-07-22 13:10:44.685079] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:48:25.516 [2024-07-22 13:10:44.785091] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:48:25.516 [2024-07-22 13:10:44.785114] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.3) 00:48:25.516 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:48:25.516 cookie is 0 00:48:25.516 is_local: 1 00:48:25.516 our_own: 0 00:48:25.516 wide_area: 0 00:48:25.516 multicast: 1 00:48:25.516 cached: 1 00:48:25.516 [2024-07-22 13:10:44.885086] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:48:25.516 [2024-07-22 13:10:44.885109] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.2) 00:48:25.516 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:48:25.516 cookie is 0 00:48:25.516 is_local: 1 00:48:25.516 our_own: 0 00:48:25.516 wide_area: 0 00:48:25.516 multicast: 1 00:48:25.516 cached: 1 00:48:26.452 [2024-07-22 13:10:45.795970] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:48:26.452 [2024-07-22 13:10:45.795993] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:48:26.452 [2024-07-22 13:10:45.796027] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:48:26.711 [2024-07-22 13:10:45.882090] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new subsystem mdns0_nvme0 00:48:26.711 [2024-07-22 13:10:45.895777] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:48:26.711 [2024-07-22 13:10:45.895798] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:48:26.711 [2024-07-22 13:10:45.895830] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:48:26.711 [2024-07-22 13:10:45.949781] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:48:26.711 [2024-07-22 13:10:45.949809] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:48:26.711 [2024-07-22 13:10:45.981910] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem mdns1_nvme0 00:48:26.712 [2024-07-22 13:10:46.040517] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:48:26.712 [2024-07-22 13:10:46.040544] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:48:29.996 13:10:49 -- 
host/mdns_discovery.sh@185 -- # get_mdns_discovery_svcs 00:48:29.996 13:10:49 -- host/mdns_discovery.sh@80 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:48:29.996 13:10:49 -- host/mdns_discovery.sh@80 -- # jq -r '.[].name' 00:48:29.996 13:10:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:48:29.996 13:10:49 -- host/mdns_discovery.sh@80 -- # sort 00:48:29.996 13:10:49 -- common/autotest_common.sh@10 -- # set +x 00:48:29.996 13:10:49 -- host/mdns_discovery.sh@80 -- # xargs 00:48:29.996 13:10:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:48:29.996 13:10:49 -- host/mdns_discovery.sh@185 -- # [[ mdns == \m\d\n\s ]] 00:48:29.996 13:10:49 -- host/mdns_discovery.sh@186 -- # get_discovery_ctrlrs 00:48:29.996 13:10:49 -- host/mdns_discovery.sh@76 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:48:29.996 13:10:49 -- host/mdns_discovery.sh@76 -- # jq -r '.[].name' 00:48:29.996 13:10:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:48:29.996 13:10:49 -- common/autotest_common.sh@10 -- # set +x 00:48:29.996 13:10:49 -- host/mdns_discovery.sh@76 -- # sort 00:48:29.996 13:10:49 -- host/mdns_discovery.sh@76 -- # xargs 00:48:29.996 13:10:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:48:29.996 13:10:49 -- host/mdns_discovery.sh@186 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:48:29.996 13:10:49 -- host/mdns_discovery.sh@187 -- # get_bdev_list 00:48:29.996 13:10:49 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:48:29.996 13:10:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:48:29.996 13:10:49 -- common/autotest_common.sh@10 -- # set +x 00:48:29.996 13:10:49 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:48:29.996 13:10:49 -- host/mdns_discovery.sh@64 -- # sort 00:48:29.996 13:10:49 -- host/mdns_discovery.sh@64 -- # xargs 00:48:29.997 13:10:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:48:29.997 13:10:49 -- host/mdns_discovery.sh@187 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:48:29.997 13:10:49 -- host/mdns_discovery.sh@190 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:48:29.997 13:10:49 -- common/autotest_common.sh@640 -- # local es=0 00:48:29.997 13:10:49 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:48:29.997 13:10:49 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:48:29.997 13:10:49 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:48:29.997 13:10:49 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:48:29.997 13:10:49 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:48:29.997 13:10:49 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:48:29.997 13:10:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:48:29.997 13:10:49 -- common/autotest_common.sh@10 -- # set +x 00:48:29.997 [2024-07-22 13:10:49.392104] bdev_mdns_client.c: 475:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running for service _nvme-disc._tcp 00:48:29.997 2024/07/22 13:10:49 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: 
map[hostnqn:nqn.2021-12.io.spdk:test name:cdc svcname:_nvme-disc._tcp], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:48:29.997 request: 00:48:29.997 { 00:48:29.997 "method": "bdev_nvme_start_mdns_discovery", 00:48:29.997 "params": { 00:48:29.997 "name": "cdc", 00:48:29.997 "svcname": "_nvme-disc._tcp", 00:48:29.997 "hostnqn": "nqn.2021-12.io.spdk:test" 00:48:29.997 } 00:48:29.997 } 00:48:29.997 Got JSON-RPC error response 00:48:29.997 GoRPCClient: error on JSON-RPC call 00:48:29.997 13:10:49 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:48:29.997 13:10:49 -- common/autotest_common.sh@643 -- # es=1 00:48:29.997 13:10:49 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:48:29.997 13:10:49 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:48:29.997 13:10:49 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:48:29.997 13:10:49 -- host/mdns_discovery.sh@191 -- # get_discovery_ctrlrs 00:48:29.997 13:10:49 -- host/mdns_discovery.sh@76 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:48:29.997 13:10:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:48:29.997 13:10:49 -- common/autotest_common.sh@10 -- # set +x 00:48:29.997 13:10:49 -- host/mdns_discovery.sh@76 -- # sort 00:48:29.997 13:10:49 -- host/mdns_discovery.sh@76 -- # xargs 00:48:29.997 13:10:49 -- host/mdns_discovery.sh@76 -- # jq -r '.[].name' 00:48:29.997 13:10:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:48:30.255 13:10:49 -- host/mdns_discovery.sh@191 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:48:30.255 13:10:49 -- host/mdns_discovery.sh@192 -- # get_bdev_list 00:48:30.255 13:10:49 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:48:30.255 13:10:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:48:30.255 13:10:49 -- common/autotest_common.sh@10 -- # set +x 00:48:30.255 13:10:49 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:48:30.255 13:10:49 -- host/mdns_discovery.sh@64 -- # sort 00:48:30.255 13:10:49 -- host/mdns_discovery.sh@64 -- # xargs 00:48:30.255 13:10:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:48:30.255 13:10:49 -- host/mdns_discovery.sh@192 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:48:30.255 13:10:49 -- host/mdns_discovery.sh@193 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:48:30.255 13:10:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:48:30.255 13:10:49 -- common/autotest_common.sh@10 -- # set +x 00:48:30.255 13:10:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:48:30.255 13:10:49 -- host/mdns_discovery.sh@195 -- # trap - SIGINT SIGTERM EXIT 00:48:30.255 13:10:49 -- host/mdns_discovery.sh@197 -- # kill 97557 00:48:30.255 13:10:49 -- host/mdns_discovery.sh@200 -- # wait 97557 00:48:30.255 [2024-07-22 13:10:49.623220] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:48:30.514 13:10:49 -- host/mdns_discovery.sh@201 -- # kill 97643 00:48:30.514 Got SIGTERM, quitting. 00:48:30.514 13:10:49 -- host/mdns_discovery.sh@202 -- # kill 97586 00:48:30.514 13:10:49 -- host/mdns_discovery.sh@203 -- # nvmftestfini 00:48:30.514 13:10:49 -- nvmf/common.sh@476 -- # nvmfcleanup 00:48:30.514 Got SIGTERM, quitting. 
00:48:30.514 13:10:49 -- nvmf/common.sh@116 -- # sync 00:48:30.514 Leaving mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 00:48:30.514 Leaving mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:48:30.514 avahi-daemon 0.8 exiting. 00:48:30.514 13:10:49 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:48:30.514 13:10:49 -- nvmf/common.sh@119 -- # set +e 00:48:30.514 13:10:49 -- nvmf/common.sh@120 -- # for i in {1..20} 00:48:30.514 13:10:49 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:48:30.514 rmmod nvme_tcp 00:48:30.514 rmmod nvme_fabrics 00:48:30.515 rmmod nvme_keyring 00:48:30.515 13:10:49 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:48:30.515 13:10:49 -- nvmf/common.sh@123 -- # set -e 00:48:30.515 13:10:49 -- nvmf/common.sh@124 -- # return 0 00:48:30.515 13:10:49 -- nvmf/common.sh@477 -- # '[' -n 97507 ']' 00:48:30.515 13:10:49 -- nvmf/common.sh@478 -- # killprocess 97507 00:48:30.515 13:10:49 -- common/autotest_common.sh@926 -- # '[' -z 97507 ']' 00:48:30.515 13:10:49 -- common/autotest_common.sh@930 -- # kill -0 97507 00:48:30.515 13:10:49 -- common/autotest_common.sh@931 -- # uname 00:48:30.515 13:10:49 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:48:30.515 13:10:49 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 97507 00:48:30.515 13:10:49 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:48:30.515 13:10:49 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:48:30.515 killing process with pid 97507 00:48:30.515 13:10:49 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 97507' 00:48:30.515 13:10:49 -- common/autotest_common.sh@945 -- # kill 97507 00:48:30.515 13:10:49 -- common/autotest_common.sh@950 -- # wait 97507 00:48:30.773 13:10:50 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:48:30.773 13:10:50 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:48:30.773 13:10:50 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:48:30.773 13:10:50 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:48:30.773 13:10:50 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:48:30.773 13:10:50 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:48:30.773 13:10:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:48:30.773 13:10:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:48:30.773 13:10:50 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:48:30.773 00:48:30.773 real 0m20.529s 00:48:30.773 user 0m40.248s 00:48:30.773 sys 0m2.000s 00:48:30.773 13:10:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:48:30.773 13:10:50 -- common/autotest_common.sh@10 -- # set +x 00:48:30.773 ************************************ 00:48:30.773 END TEST nvmf_mdns_discovery 00:48:30.773 ************************************ 00:48:30.773 13:10:50 -- nvmf/nvmf.sh@115 -- # [[ 1 -eq 1 ]] 00:48:30.773 13:10:50 -- nvmf/nvmf.sh@116 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:48:30.773 13:10:50 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:48:30.773 13:10:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:48:30.773 13:10:50 -- common/autotest_common.sh@10 -- # set +x 00:48:30.773 ************************************ 00:48:30.773 START TEST nvmf_multipath 00:48:30.773 ************************************ 00:48:30.773 13:10:50 -- common/autotest_common.sh@1104 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:48:31.032 * Looking for test storage... 00:48:31.032 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:48:31.032 13:10:50 -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:48:31.032 13:10:50 -- nvmf/common.sh@7 -- # uname -s 00:48:31.032 13:10:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:48:31.032 13:10:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:48:31.032 13:10:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:48:31.032 13:10:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:48:31.032 13:10:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:48:31.032 13:10:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:48:31.032 13:10:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:48:31.032 13:10:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:48:31.032 13:10:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:48:31.032 13:10:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:48:31.032 13:10:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:864ae4a3-cd96-439b-8ef2-6dab4d992115 00:48:31.032 13:10:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=864ae4a3-cd96-439b-8ef2-6dab4d992115 00:48:31.032 13:10:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:48:31.032 13:10:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:48:31.032 13:10:50 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:48:31.032 13:10:50 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:48:31.032 13:10:50 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:48:31.032 13:10:50 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:48:31.032 13:10:50 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:48:31.032 13:10:50 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:48:31.032 13:10:50 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:48:31.032 13:10:50 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:48:31.032 13:10:50 -- paths/export.sh@5 -- # export PATH 00:48:31.032 13:10:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:48:31.032 13:10:50 -- nvmf/common.sh@46 -- # : 0 00:48:31.032 13:10:50 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:48:31.032 13:10:50 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:48:31.032 13:10:50 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:48:31.032 13:10:50 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:48:31.032 13:10:50 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:48:31.032 13:10:50 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:48:31.032 13:10:50 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:48:31.032 13:10:50 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:48:31.032 13:10:50 -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:48:31.032 13:10:50 -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:48:31.032 13:10:50 -- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:48:31.032 13:10:50 -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:48:31.032 13:10:50 -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:48:31.032 13:10:50 -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:48:31.032 13:10:50 -- host/multipath.sh@30 -- # nvmftestinit 00:48:31.032 13:10:50 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:48:31.032 13:10:50 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:48:31.032 13:10:50 -- nvmf/common.sh@436 -- # prepare_net_devs 00:48:31.032 13:10:50 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:48:31.032 13:10:50 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:48:31.032 13:10:50 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:48:31.032 13:10:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:48:31.032 13:10:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:48:31.032 13:10:50 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:48:31.032 13:10:50 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:48:31.032 13:10:50 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:48:31.032 13:10:50 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:48:31.032 13:10:50 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:48:31.032 13:10:50 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:48:31.032 13:10:50 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:48:31.032 13:10:50 -- nvmf/common.sh@141 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:48:31.032 13:10:50 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:48:31.032 13:10:50 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:48:31.032 13:10:50 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:48:31.032 13:10:50 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:48:31.032 13:10:50 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:48:31.032 13:10:50 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:48:31.032 13:10:50 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:48:31.032 13:10:50 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:48:31.032 13:10:50 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:48:31.032 13:10:50 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:48:31.032 13:10:50 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:48:31.032 13:10:50 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:48:31.032 Cannot find device "nvmf_tgt_br" 00:48:31.032 13:10:50 -- nvmf/common.sh@154 -- # true 00:48:31.032 13:10:50 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:48:31.032 Cannot find device "nvmf_tgt_br2" 00:48:31.032 13:10:50 -- nvmf/common.sh@155 -- # true 00:48:31.032 13:10:50 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:48:31.032 13:10:50 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:48:31.032 Cannot find device "nvmf_tgt_br" 00:48:31.032 13:10:50 -- nvmf/common.sh@157 -- # true 00:48:31.032 13:10:50 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:48:31.032 Cannot find device "nvmf_tgt_br2" 00:48:31.032 13:10:50 -- nvmf/common.sh@158 -- # true 00:48:31.032 13:10:50 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:48:31.032 13:10:50 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:48:31.032 13:10:50 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:48:31.032 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:48:31.032 13:10:50 -- nvmf/common.sh@161 -- # true 00:48:31.032 13:10:50 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:48:31.032 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:48:31.032 13:10:50 -- nvmf/common.sh@162 -- # true 00:48:31.032 13:10:50 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:48:31.032 13:10:50 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:48:31.032 13:10:50 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:48:31.032 13:10:50 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:48:31.032 13:10:50 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:48:31.032 13:10:50 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:48:31.032 13:10:50 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:48:31.032 13:10:50 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:48:31.291 13:10:50 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:48:31.291 13:10:50 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:48:31.291 13:10:50 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:48:31.291 13:10:50 -- nvmf/common.sh@184 -- # ip 
link set nvmf_tgt_br up 00:48:31.291 13:10:50 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:48:31.291 13:10:50 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:48:31.291 13:10:50 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:48:31.292 13:10:50 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:48:31.292 13:10:50 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:48:31.292 13:10:50 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:48:31.292 13:10:50 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:48:31.292 13:10:50 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:48:31.292 13:10:50 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:48:31.292 13:10:50 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:48:31.292 13:10:50 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:48:31.292 13:10:50 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:48:31.292 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:48:31.292 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:48:31.292 00:48:31.292 --- 10.0.0.2 ping statistics --- 00:48:31.292 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:48:31.292 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:48:31.292 13:10:50 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:48:31.292 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:48:31.292 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:48:31.292 00:48:31.292 --- 10.0.0.3 ping statistics --- 00:48:31.292 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:48:31.292 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:48:31.292 13:10:50 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:48:31.292 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:48:31.292 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:48:31.292 00:48:31.292 --- 10.0.0.1 ping statistics --- 00:48:31.292 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:48:31.292 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:48:31.292 13:10:50 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:48:31.292 13:10:50 -- nvmf/common.sh@421 -- # return 0 00:48:31.292 13:10:50 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:48:31.292 13:10:50 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:48:31.292 13:10:50 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:48:31.292 13:10:50 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:48:31.292 13:10:50 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:48:31.292 13:10:50 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:48:31.292 13:10:50 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:48:31.292 13:10:50 -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:48:31.292 13:10:50 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:48:31.292 13:10:50 -- common/autotest_common.sh@712 -- # xtrace_disable 00:48:31.292 13:10:50 -- common/autotest_common.sh@10 -- # set +x 00:48:31.292 13:10:50 -- nvmf/common.sh@469 -- # nvmfpid=98153 00:48:31.292 13:10:50 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:48:31.292 13:10:50 -- nvmf/common.sh@470 -- # waitforlisten 98153 00:48:31.292 13:10:50 -- common/autotest_common.sh@819 -- # '[' -z 98153 ']' 00:48:31.292 13:10:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:48:31.292 13:10:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:48:31.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:48:31.292 13:10:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:48:31.292 13:10:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:48:31.292 13:10:50 -- common/autotest_common.sh@10 -- # set +x 00:48:31.292 [2024-07-22 13:10:50.657599] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:48:31.292 [2024-07-22 13:10:50.657692] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:48:31.551 [2024-07-22 13:10:50.791951] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:48:31.551 [2024-07-22 13:10:50.853850] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:48:31.551 [2024-07-22 13:10:50.854015] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:48:31.551 [2024-07-22 13:10:50.854028] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:48:31.551 [2024-07-22 13:10:50.854036] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
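The network bring-up above is nvmf_veth_init from nvmf/common.sh: the target gets its own network namespace and two addressed interfaces (10.0.0.2, 10.0.0.3), the initiator keeps 10.0.0.1 in the root namespace, a bridge joins the host-side veth peers, and connectivity is verified with pings before nvme-tcp is loaded and nvmf_tgt is started inside the namespace. Condensed into plain commands taken from the xtrace output above (a sketch of this run's topology, not a copy of common.sh):

    # Target side lives in its own namespace; the initiator stays in the root namespace.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                 # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # first target address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2  # second target address
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up   # bridge ties the host-side peers together
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3             # initiator -> target
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1    # target -> initiator
    modprobe nvme-tcp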
00:48:31.551 [2024-07-22 13:10:50.854500] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:48:31.551 [2024-07-22 13:10:50.854534] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:48:32.486 13:10:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:48:32.486 13:10:51 -- common/autotest_common.sh@852 -- # return 0 00:48:32.486 13:10:51 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:48:32.486 13:10:51 -- common/autotest_common.sh@718 -- # xtrace_disable 00:48:32.486 13:10:51 -- common/autotest_common.sh@10 -- # set +x 00:48:32.486 13:10:51 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:48:32.486 13:10:51 -- host/multipath.sh@33 -- # nvmfapp_pid=98153 00:48:32.486 13:10:51 -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:48:32.744 [2024-07-22 13:10:51.956042] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:48:32.744 13:10:51 -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:48:33.002 Malloc0 00:48:33.002 13:10:52 -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:48:33.261 13:10:52 -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:48:33.261 13:10:52 -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:48:33.519 [2024-07-22 13:10:52.852785] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:48:33.519 13:10:52 -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:48:33.777 [2024-07-22 13:10:53.060863] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:48:33.777 13:10:53 -- host/multipath.sh@44 -- # bdevperf_pid=98258 00:48:33.777 13:10:53 -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:48:33.777 13:10:53 -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:48:33.777 13:10:53 -- host/multipath.sh@47 -- # waitforlisten 98258 /var/tmp/bdevperf.sock 00:48:33.777 13:10:53 -- common/autotest_common.sh@819 -- # '[' -z 98258 ']' 00:48:33.777 13:10:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:48:33.777 13:10:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:48:33.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:48:33.777 13:10:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
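With the namespace in place, the target side is assembled over rpc.py: a TCP transport, a 64 MB malloc bdev with 512-byte blocks, a subsystem with ANA reporting, its namespace, and two listeners on the same address so the host sees two paths, after which bdevperf is launched for the initiator side. A condensed sketch of those calls as they appear in the log above (flag annotations are best-effort readings of this run, not a substitute for multipath.sh):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    # Target application runs inside the test namespace (pid 98153 in this run).
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &

    $rpc nvmf_create_transport -t tcp -o -u 8192          # TCP transport, options as used by the test
    $rpc bdev_malloc_create 64 512 -b Malloc0             # 64 MB bdev, 512-byte blocks
    $rpc nvmf_create_subsystem "$nqn" -a -s SPDK00000000000001 -r -m 2   # -r: ANA reporting enabled
    $rpc nvmf_subsystem_add_ns "$nqn" Malloc0
    # Two listeners on the same IP: one path per port.
    $rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4421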
00:48:33.777 13:10:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:48:33.777 13:10:53 -- common/autotest_common.sh@10 -- # set +x 00:48:34.711 13:10:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:48:34.711 13:10:54 -- common/autotest_common.sh@852 -- # return 0 00:48:34.711 13:10:54 -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:48:34.969 13:10:54 -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:48:35.227 Nvme0n1 00:48:35.486 13:10:54 -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:48:35.744 Nvme0n1 00:48:35.744 13:10:55 -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:48:35.744 13:10:55 -- host/multipath.sh@78 -- # sleep 1 00:48:36.678 13:10:56 -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:48:36.678 13:10:56 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:48:36.936 13:10:56 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:48:37.195 13:10:56 -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:48:37.195 13:10:56 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 98153 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:48:37.195 13:10:56 -- host/multipath.sh@65 -- # dtrace_pid=98344 00:48:37.195 13:10:56 -- host/multipath.sh@66 -- # sleep 6 00:48:43.760 13:11:02 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:48:43.760 13:11:02 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:48:43.760 13:11:02 -- host/multipath.sh@67 -- # active_port=4421 00:48:43.760 13:11:02 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:48:43.760 Attaching 4 probes... 
00:48:43.760 @path[10.0.0.2, 4421]: 20819 00:48:43.760 @path[10.0.0.2, 4421]: 21333 00:48:43.760 @path[10.0.0.2, 4421]: 21558 00:48:43.760 @path[10.0.0.2, 4421]: 21111 00:48:43.760 @path[10.0.0.2, 4421]: 21303 00:48:43.760 13:11:02 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:48:43.760 13:11:02 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:48:43.760 13:11:02 -- host/multipath.sh@69 -- # sed -n 1p 00:48:43.760 13:11:02 -- host/multipath.sh@69 -- # port=4421 00:48:43.760 13:11:02 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:48:43.760 13:11:02 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:48:43.760 13:11:02 -- host/multipath.sh@72 -- # kill 98344 00:48:43.760 13:11:02 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:48:43.761 13:11:02 -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:48:43.761 13:11:02 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:48:43.761 13:11:02 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:48:44.021 13:11:03 -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:48:44.021 13:11:03 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 98153 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:48:44.021 13:11:03 -- host/multipath.sh@65 -- # dtrace_pid=98477 00:48:44.021 13:11:03 -- host/multipath.sh@66 -- # sleep 6 00:48:50.581 13:11:09 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:48:50.581 13:11:09 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:48:50.581 13:11:09 -- host/multipath.sh@67 -- # active_port=4420 00:48:50.581 13:11:09 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:48:50.581 Attaching 4 probes... 
00:48:50.581 @path[10.0.0.2, 4420]: 21151 00:48:50.581 @path[10.0.0.2, 4420]: 21294 00:48:50.581 @path[10.0.0.2, 4420]: 21259 00:48:50.581 @path[10.0.0.2, 4420]: 21381 00:48:50.581 @path[10.0.0.2, 4420]: 21318 00:48:50.581 13:11:09 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:48:50.581 13:11:09 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:48:50.581 13:11:09 -- host/multipath.sh@69 -- # sed -n 1p 00:48:50.581 13:11:09 -- host/multipath.sh@69 -- # port=4420 00:48:50.581 13:11:09 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:48:50.581 13:11:09 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:48:50.581 13:11:09 -- host/multipath.sh@72 -- # kill 98477 00:48:50.581 13:11:09 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:48:50.581 13:11:09 -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:48:50.581 13:11:09 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:48:50.581 13:11:09 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:48:50.581 13:11:09 -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:48:50.581 13:11:09 -- host/multipath.sh@65 -- # dtrace_pid=98607 00:48:50.581 13:11:09 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 98153 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:48:50.581 13:11:09 -- host/multipath.sh@66 -- # sleep 6 00:48:57.199 13:11:15 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:48:57.199 13:11:15 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:48:57.199 13:11:16 -- host/multipath.sh@67 -- # active_port=4421 00:48:57.199 13:11:16 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:48:57.199 Attaching 4 probes... 
00:48:57.199 @path[10.0.0.2, 4421]: 16811 00:48:57.199 @path[10.0.0.2, 4421]: 20951 00:48:57.199 @path[10.0.0.2, 4421]: 20840 00:48:57.199 @path[10.0.0.2, 4421]: 21005 00:48:57.199 @path[10.0.0.2, 4421]: 20800 00:48:57.199 13:11:16 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:48:57.199 13:11:16 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:48:57.199 13:11:16 -- host/multipath.sh@69 -- # sed -n 1p 00:48:57.199 13:11:16 -- host/multipath.sh@69 -- # port=4421 00:48:57.199 13:11:16 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:48:57.199 13:11:16 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:48:57.199 13:11:16 -- host/multipath.sh@72 -- # kill 98607 00:48:57.199 13:11:16 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:48:57.199 13:11:16 -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:48:57.199 13:11:16 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:48:57.199 13:11:16 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:48:57.458 13:11:16 -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:48:57.458 13:11:16 -- host/multipath.sh@65 -- # dtrace_pid=98738 00:48:57.458 13:11:16 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 98153 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:48:57.458 13:11:16 -- host/multipath.sh@66 -- # sleep 6 00:49:04.018 13:11:22 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:49:04.018 13:11:22 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:49:04.018 13:11:22 -- host/multipath.sh@67 -- # active_port= 00:49:04.018 13:11:22 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:49:04.018 Attaching 4 probes... 
00:49:04.018 00:49:04.018 00:49:04.018 00:49:04.018 00:49:04.018 00:49:04.018 13:11:22 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:49:04.018 13:11:22 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:49:04.018 13:11:22 -- host/multipath.sh@69 -- # sed -n 1p 00:49:04.018 13:11:23 -- host/multipath.sh@69 -- # port= 00:49:04.018 13:11:23 -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:49:04.018 13:11:23 -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:49:04.018 13:11:23 -- host/multipath.sh@72 -- # kill 98738 00:49:04.018 13:11:23 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:49:04.018 13:11:23 -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:49:04.018 13:11:23 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:49:04.018 13:11:23 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:49:04.363 13:11:23 -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:49:04.363 13:11:23 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 98153 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:49:04.363 13:11:23 -- host/multipath.sh@65 -- # dtrace_pid=98874 00:49:04.363 13:11:23 -- host/multipath.sh@66 -- # sleep 6 00:49:10.923 13:11:29 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:49:10.923 13:11:29 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:49:10.923 13:11:29 -- host/multipath.sh@67 -- # active_port=4421 00:49:10.923 13:11:29 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:49:10.923 Attaching 4 probes... 
00:49:10.923 @path[10.0.0.2, 4421]: 20104
00:49:10.923 @path[10.0.0.2, 4421]: 20564
00:49:10.923 @path[10.0.0.2, 4421]: 20434
00:49:10.923 @path[10.0.0.2, 4421]: 20622
00:49:10.923 @path[10.0.0.2, 4421]: 20679
00:49:10.923 13:11:29 -- host/multipath.sh@69 -- # cut -d ']' -f1
00:49:10.923 13:11:29 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}'
00:49:10.923 13:11:29 -- host/multipath.sh@69 -- # sed -n 1p
00:49:10.923 13:11:29 -- host/multipath.sh@69 -- # port=4421
00:49:10.923 13:11:29 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]]
00:49:10.923 13:11:29 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]]
00:49:10.923 13:11:29 -- host/multipath.sh@72 -- # kill 98874
00:49:10.923 13:11:29 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:49:10.923 13:11:29 -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:49:10.923 [2024-07-22 13:11:29.956367] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1421c30 is same with the state(5) to be set
00:49:10.924 [previous tcp.c:1574 message repeated for every timestamp from 13:11:29.956424 through 13:11:29.957016]
00:49:10.924 13:11:29 -- host/multipath.sh@101 -- # sleep 1
00:49:11.858 13:11:30 -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420
00:49:11.858 13:11:30 -- host/multipath.sh@65 -- # dtrace_pid=99004
00:49:11.858 13:11:30 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 98153 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt
00:49:11.858 13:11:30 -- host/multipath.sh@66 -- # sleep 6
00:49:18.436 13:11:36 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1
00:49:18.436 13:11:36 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid'
00:49:18.436 13:11:37 -- host/multipath.sh@67 -- # active_port=4420
00:49:18.436 13:11:37 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:49:18.436 Attaching 4 probes...
00:49:18.436 @path[10.0.0.2, 4420]: 19969 00:49:18.436 @path[10.0.0.2, 4420]: 20829 00:49:18.436 @path[10.0.0.2, 4420]: 20931 00:49:18.436 @path[10.0.0.2, 4420]: 20817 00:49:18.436 @path[10.0.0.2, 4420]: 21093 00:49:18.436 13:11:37 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:49:18.436 13:11:37 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:49:18.436 13:11:37 -- host/multipath.sh@69 -- # sed -n 1p 00:49:18.436 13:11:37 -- host/multipath.sh@69 -- # port=4420 00:49:18.436 13:11:37 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:49:18.436 13:11:37 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:49:18.436 13:11:37 -- host/multipath.sh@72 -- # kill 99004 00:49:18.436 13:11:37 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:49:18.436 13:11:37 -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:49:18.436 [2024-07-22 13:11:37.457277] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:49:18.436 13:11:37 -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:49:18.436 13:11:37 -- host/multipath.sh@111 -- # sleep 6 00:49:24.995 13:11:43 -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:49:24.995 13:11:43 -- host/multipath.sh@65 -- # dtrace_pid=99195 00:49:24.995 13:11:43 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 98153 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:49:24.995 13:11:43 -- host/multipath.sh@66 -- # sleep 6 00:49:31.564 13:11:49 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:49:31.564 13:11:49 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:49:31.564 13:11:50 -- host/multipath.sh@67 -- # active_port=4421 00:49:31.564 13:11:50 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:49:31.564 Attaching 4 probes... 
00:49:31.564 @path[10.0.0.2, 4421]: 19869 00:49:31.564 @path[10.0.0.2, 4421]: 20327 00:49:31.564 @path[10.0.0.2, 4421]: 20414 00:49:31.564 @path[10.0.0.2, 4421]: 19850 00:49:31.564 @path[10.0.0.2, 4421]: 19918 00:49:31.564 13:11:50 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:49:31.564 13:11:50 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:49:31.564 13:11:50 -- host/multipath.sh@69 -- # sed -n 1p 00:49:31.564 13:11:50 -- host/multipath.sh@69 -- # port=4421 00:49:31.564 13:11:50 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:49:31.564 13:11:50 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:49:31.564 13:11:50 -- host/multipath.sh@72 -- # kill 99195 00:49:31.564 13:11:50 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:49:31.564 13:11:50 -- host/multipath.sh@114 -- # killprocess 98258 00:49:31.564 13:11:50 -- common/autotest_common.sh@926 -- # '[' -z 98258 ']' 00:49:31.564 13:11:50 -- common/autotest_common.sh@930 -- # kill -0 98258 00:49:31.564 13:11:50 -- common/autotest_common.sh@931 -- # uname 00:49:31.564 13:11:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:49:31.564 13:11:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 98258 00:49:31.564 13:11:50 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:49:31.564 13:11:50 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:49:31.564 killing process with pid 98258 00:49:31.564 13:11:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 98258' 00:49:31.564 13:11:50 -- common/autotest_common.sh@945 -- # kill 98258 00:49:31.564 13:11:50 -- common/autotest_common.sh@950 -- # wait 98258 00:49:31.564 Connection closed with partial response: 00:49:31.564 00:49:31.564 00:49:31.564 13:11:50 -- host/multipath.sh@116 -- # wait 98258 00:49:31.564 13:11:50 -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:49:31.564 [2024-07-22 13:10:53.133920] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:49:31.564 [2024-07-22 13:10:53.134037] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98258 ] 00:49:31.564 [2024-07-22 13:10:53.272511] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:49:31.564 [2024-07-22 13:10:53.341091] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:49:31.564 Running I/O for 90 seconds... 
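Everything from bdev_nvme_attach_controller onward repeats one pattern: two controllers are attached to the same NQN through the bdevperf RPC socket (the second with -x multipath), then each scenario sets the ANA state of the two listeners and uses the nvmf_path.bt bpftrace script to check which port actually carried I/O. A rough reconstruction of that loop from the xtrace output above (helper and variable names follow the log; this is a sketch, not the verbatim multipath.sh):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    bperf() { $rpc -s /var/tmp/bdevperf.sock "$@"; }
    nqn=nqn.2016-06.io.spdk:cnode1
    nvmfapp_pid=98153            # nvmf_tgt pid in this run

    # One bdev (Nvme0n1), two paths: the second attach adds port 4421 in multipath mode.
    bperf bdev_nvme_set_options -r -1
    bperf bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n "$nqn" -l -1 -o 10
    bperf bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n "$nqn" -x multipath -l -1 -o 10

    set_ANA_state() {            # set_ANA_state <state for 4420> <state for 4421>
        $rpc nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a 10.0.0.2 -s 4420 -n "$1"
        $rpc nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a 10.0.0.2 -s 4421 -n "$2"
    }

    confirm_io_on_port() {       # confirm_io_on_port <expected ana_state> <expected port>
        local state=$1 expected=$2 active_port port
        # Count I/O per path on the target for a few seconds.
        /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh "$nvmfapp_pid" \
            /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt &> trace.txt &
        local dtrace_pid=$!
        sleep 6
        # Port whose listener currently reports the expected ANA state ...
        active_port=$($rpc nvmf_subsystem_get_listeners "$nqn" |
            jq -r ".[] | select (.ana_states[0].ana_state==\"$state\") | .address.trsvcid")
        # ... and port that the traced I/O actually used.
        port=$(cut -d ']' -f1 < trace.txt | awk '$1=="@path[10.0.0.2," {print $2}' | sed -n 1p)
        kill "$dtrace_pid"; rm -f trace.txt
        [[ $port == "$expected" && $active_port == "$expected" ]]
    }

    set_ANA_state non_optimized optimized;     confirm_io_on_port optimized 4421
    set_ANA_state non_optimized inaccessible;  confirm_io_on_port non_optimized 4420
    set_ANA_state inaccessible  optimized;     confirm_io_on_port optimized 4421
    set_ANA_state inaccessible  inaccessible;  confirm_io_on_port '' ''
    set_ANA_state non_optimized optimized;     confirm_io_on_port optimized 4421
    # ... the final rounds remove and re-add the 4421 listener and re-check in the same way.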
00:49:31.564 [2024-07-22 13:11:03.184564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:2184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:31.564 [2024-07-22 13:11:03.184631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:49:31.564 [2024-07-22 13:11:03.184697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:2192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:31.564 [2024-07-22 13:11:03.184716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:49:31.564 [2024-07-22 13:11:03.184736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:2200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:31.564 [2024-07-22 13:11:03.184749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:49:31.564 [2024-07-22 13:11:03.184768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:1504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.564 [2024-07-22 13:11:03.184781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:49:31.564 [2024-07-22 13:11:03.184801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.564 [2024-07-22 13:11:03.184814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:49:31.564 [2024-07-22 13:11:03.184836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:1560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.564 [2024-07-22 13:11:03.184848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:49:31.564 [2024-07-22 13:11:03.184866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:1568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.564 [2024-07-22 13:11:03.184879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:49:31.564 [2024-07-22 13:11:03.184903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:1576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.564 [2024-07-22 13:11:03.184922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:49:31.564 [2024-07-22 13:11:03.184940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:1592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.564 [2024-07-22 13:11:03.184953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:49:31.564 [2024-07-22 13:11:03.184971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:1640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.564 [2024-07-22 13:11:03.184983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:96 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:49:31.564 [2024-07-22 13:11:03.185002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:1696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.564 [2024-07-22 13:11:03.185043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:49:31.564 [2024-07-22 13:11:03.185065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:1704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.564 [2024-07-22 13:11:03.185094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:49:31.564 [2024-07-22 13:11:03.185113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:1744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.564 [2024-07-22 13:11:03.185127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:49:31.564 [2024-07-22 13:11:03.185163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.564 [2024-07-22 13:11:03.185200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:49:31.564 [2024-07-22 13:11:03.185232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:1776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.564 [2024-07-22 13:11:03.185259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:49:31.564 [2024-07-22 13:11:03.185280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:1784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.564 [2024-07-22 13:11:03.185294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:49:31.564 [2024-07-22 13:11:03.185314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:1792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.564 [2024-07-22 13:11:03.185328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:49:31.564 [2024-07-22 13:11:03.185350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:1816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.564 [2024-07-22 13:11:03.185364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:49:31.564 [2024-07-22 13:11:03.185384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:1824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.564 [2024-07-22 13:11:03.185398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:49:31.564 [2024-07-22 13:11:03.185418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:2208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:31.564 [2024-07-22 13:11:03.185432] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:49:31.564 [2024-07-22 13:11:03.185453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:2216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.564 [2024-07-22 13:11:03.185471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:49:31.564 [2024-07-22 13:11:03.185905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:2224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:31.564 [2024-07-22 13:11:03.185931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:49:31.564 [2024-07-22 13:11:03.185956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:2232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:31.564 [2024-07-22 13:11:03.185972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:49:31.564 [2024-07-22 13:11:03.186040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:2240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:31.564 [2024-07-22 13:11:03.186056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:49:31.564 [2024-07-22 13:11:03.186092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:2248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:31.564 [2024-07-22 13:11:03.186106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:49:31.564 [2024-07-22 13:11:03.186127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:2256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.564 [2024-07-22 13:11:03.186141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:49:31.564 [2024-07-22 13:11:03.186162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:2264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.564 [2024-07-22 13:11:03.186176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:49:31.564 [2024-07-22 13:11:03.186771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:2272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:31.565 [2024-07-22 13:11:03.186796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:49:31.565 [2024-07-22 13:11:03.186820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:2280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:31.565 [2024-07-22 13:11:03.186836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:49:31.565 [2024-07-22 13:11:03.186858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:2288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:31.565 [2024-07-22 
13:11:03.186873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:49:31.565 [2024-07-22 13:11:03.186894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:2296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:31.565 [2024-07-22 13:11:03.186908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:49:31.565 [2024-07-22 13:11:03.186930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:2304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.565 [2024-07-22 13:11:03.186944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:49:31.565 [2024-07-22 13:11:03.186980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:2312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.565 [2024-07-22 13:11:03.186994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:49:31.565 [2024-07-22 13:11:03.187014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:2320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:31.565 [2024-07-22 13:11:03.187028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:49:31.565 [2024-07-22 13:11:03.187063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:2328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:31.565 [2024-07-22 13:11:03.187077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:49:31.565 [2024-07-22 13:11:03.187107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:2336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:31.565 [2024-07-22 13:11:03.187123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:49:31.565 [2024-07-22 13:11:03.187143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:2344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.565 [2024-07-22 13:11:03.187157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:49:31.565 [2024-07-22 13:11:03.187176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:2352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:31.565 [2024-07-22 13:11:03.187205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:49:31.565 [2024-07-22 13:11:03.187226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.565 [2024-07-22 13:11:03.187240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:49:31.565 [2024-07-22 13:11:03.187271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:2368 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:49:31.565 [2024-07-22 13:11:03.187284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:49:31.565 [2024-07-22 13:11:03.187304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:2376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:31.565 [2024-07-22 13:11:03.187318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:49:31.565 [2024-07-22 13:11:03.187338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:2384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.565 [2024-07-22 13:11:03.187351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:49:31.565 [2024-07-22 13:11:03.187371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:2392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.565 [2024-07-22 13:11:03.187385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:49:31.565 [2024-07-22 13:11:03.187405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:2400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:31.565 [2024-07-22 13:11:03.187419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:49:31.565 [2024-07-22 13:11:03.187438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:2408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:31.565 [2024-07-22 13:11:03.187452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:49:31.565 [2024-07-22 13:11:03.187472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:2416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.565 [2024-07-22 13:11:03.187485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:49:31.565 [2024-07-22 13:11:03.187505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:2424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:31.565 [2024-07-22 13:11:03.187518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:49:31.565 [2024-07-22 13:11:03.187538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:2432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:31.565 [2024-07-22 13:11:03.187559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:49:31.565 [2024-07-22 13:11:03.187581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:2440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.565 [2024-07-22 13:11:03.187595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:31.565 [2024-07-22 13:11:03.187614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:3 nsid:1 lba:2448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.565 [2024-07-22 13:11:03.187628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:31.565 [2024-07-22 13:11:03.187647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:2456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:31.565 [2024-07-22 13:11:03.187661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:49:31.565 [2024-07-22 13:11:03.187683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:2464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.565 [2024-07-22 13:11:03.187697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:49:31.565 [2024-07-22 13:11:03.187716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:2472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.565 [2024-07-22 13:11:03.187730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:49:31.565 [2024-07-22 13:11:03.187750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:2480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:31.565 [2024-07-22 13:11:03.187764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:49:31.565 [2024-07-22 13:11:03.187783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:2488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:31.565 [2024-07-22 13:11:03.187797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:49:31.565 [2024-07-22 13:11:03.188375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:2496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:31.565 [2024-07-22 13:11:03.188401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:49:31.565 [2024-07-22 13:11:03.188426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:2504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:31.565 [2024-07-22 13:11:03.188441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:49:31.565 [2024-07-22 13:11:03.188462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:2512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.565 [2024-07-22 13:11:03.188476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:49:31.565 [2024-07-22 13:11:03.188511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.565 [2024-07-22 13:11:03.188525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:49:31.565 [2024-07-22 13:11:03.188563] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:2528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.565 [2024-07-22 13:11:03.188578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:49:31.565 [2024-07-22 13:11:03.188611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:2536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.565 [2024-07-22 13:11:03.188628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:49:31.565 [2024-07-22 13:11:03.188649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:2544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:31.565 [2024-07-22 13:11:03.188663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:49:31.565 [2024-07-22 13:11:03.188685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:2552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.565 [2024-07-22 13:11:03.188699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:49:31.565 [2024-07-22 13:11:03.188720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:2560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.565 [2024-07-22 13:11:03.188735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:49:31.565 [2024-07-22 13:11:03.188756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:2568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:31.565 [2024-07-22 13:11:03.188770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:49:31.566 [2024-07-22 13:11:03.188791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:2576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.566 [2024-07-22 13:11:03.188806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:49:31.566 [2024-07-22 13:11:03.188827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:2584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.566 [2024-07-22 13:11:03.188849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:49:31.566 [2024-07-22 13:11:03.188870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:2592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.566 [2024-07-22 13:11:03.188885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:49:31.566 [2024-07-22 13:11:03.188906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:1832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.566 [2024-07-22 13:11:03.188923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 
00:49:31.566 [2024-07-22 13:11:03.188944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:1840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.566 [2024-07-22 13:11:03.188958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:49:31.566 [2024-07-22 13:11:03.188979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.566 [2024-07-22 13:11:03.188993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:49:31.566 [2024-07-22 13:11:03.189014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.566 [2024-07-22 13:11:03.189028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:49:31.566 [2024-07-22 13:11:03.189057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:1888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.566 [2024-07-22 13:11:03.189073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:49:31.566 [2024-07-22 13:11:03.189094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.566 [2024-07-22 13:11:03.189108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:49:31.566 [2024-07-22 13:11:03.189129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:1912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.566 [2024-07-22 13:11:03.189144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:49:31.566 [2024-07-22 13:11:03.189179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:1936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.566 [2024-07-22 13:11:03.189197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:49:31.566 [2024-07-22 13:11:03.189219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:1944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.566 [2024-07-22 13:11:03.189234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:49:31.566 [2024-07-22 13:11:03.189265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:1952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.566 [2024-07-22 13:11:03.189279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:49:31.566 [2024-07-22 13:11:03.189312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.566 [2024-07-22 13:11:03.189329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:12 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:49:31.566 [2024-07-22 13:11:03.189350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:1984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.566 [2024-07-22 13:11:03.189366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:49:31.566 [2024-07-22 13:11:03.189386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:2008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.566 [2024-07-22 13:11:03.189401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:49:31.566 [2024-07-22 13:11:03.189422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:2024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.566 [2024-07-22 13:11:03.189436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:49:31.566 [2024-07-22 13:11:03.189457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.566 [2024-07-22 13:11:03.189472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:49:31.566 [2024-07-22 13:11:03.189494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:2040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.566 [2024-07-22 13:11:03.189508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:49:31.566 [2024-07-22 13:11:03.189528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:2600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:31.566 [2024-07-22 13:11:03.189550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:49:31.566 [2024-07-22 13:11:03.189573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.566 [2024-07-22 13:11:03.189588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:49:31.566 [2024-07-22 13:11:03.189609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:2616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.566 [2024-07-22 13:11:03.189623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:49:31.566 [2024-07-22 13:11:03.189644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:2624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:31.566 [2024-07-22 13:11:03.189658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:49:31.566 [2024-07-22 13:11:03.189679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:2632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.566 [2024-07-22 13:11:03.189693] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:49:31.566 [2024-07-22 13:11:03.189714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:2640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.566 [2024-07-22 13:11:03.189728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:49:31.566 [2024-07-22 13:11:03.189749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:2048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.566 [2024-07-22 13:11:03.189763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:49:31.566 [2024-07-22 13:11:03.189785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:2056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.566 [2024-07-22 13:11:03.189799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:49:31.566 [2024-07-22 13:11:03.189820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:2104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.566 [2024-07-22 13:11:03.189834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:49:31.566 [2024-07-22 13:11:03.189855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:2136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.566 [2024-07-22 13:11:03.189870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:49:31.566 [2024-07-22 13:11:03.189902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.566 [2024-07-22 13:11:03.189921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:49:31.566 [2024-07-22 13:11:03.189941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:2152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.566 [2024-07-22 13:11:03.189971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:49:31.566 [2024-07-22 13:11:03.190005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:2160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.566 [2024-07-22 13:11:03.190025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:49:31.566 [2024-07-22 13:11:03.190047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:2176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.566 [2024-07-22 13:11:03.190061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:49:31.566 [2024-07-22 13:11:03.190081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:2648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:31.566 [2024-07-22 13:11:03.190094] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:49:31.566 [2024-07-22 13:11:03.190114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:2656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.566 [2024-07-22 13:11:03.190127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:49:31.566 [2024-07-22 13:11:03.190147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:2664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.566 [2024-07-22 13:11:03.190160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:49:31.566 [2024-07-22 13:11:03.190193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:2672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:31.566 [2024-07-22 13:11:03.190207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:49:31.566 [2024-07-22 13:11:03.190227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:2680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.566 [2024-07-22 13:11:03.190241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:49:31.566 [2024-07-22 13:11:03.190269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:2688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:31.567 [2024-07-22 13:11:03.190286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:49:31.567 [2024-07-22 13:11:03.190319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:31.567 [2024-07-22 13:11:03.190333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:49:31.567 [2024-07-22 13:11:03.190359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:2704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.567 [2024-07-22 13:11:03.190374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:49:31.567 [2024-07-22 13:11:03.190393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:2712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:31.567 [2024-07-22 13:11:03.190407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:49:31.567 [2024-07-22 13:11:03.190427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:2720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.567 [2024-07-22 13:11:03.190440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:49:31.567 [2024-07-22 13:11:03.190460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:2728 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:49:31.567 [2024-07-22 13:11:03.190474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:49:31.567 [2024-07-22 13:11:03.190501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:2736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.567 [2024-07-22 13:11:03.190516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:49:31.567 [2024-07-22 13:11:03.190536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:2744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.567 [2024-07-22 13:11:03.190550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:49:31.567 [2024-07-22 13:11:09.663866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:78328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:31.567 [2024-07-22 13:11:09.663928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:49:31.567 [2024-07-22 13:11:09.663996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:77672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.567 [2024-07-22 13:11:09.664015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:49:31.567 [2024-07-22 13:11:09.664035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:77696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.567 [2024-07-22 13:11:09.664049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:49:31.567 [2024-07-22 13:11:09.664068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:77704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.567 [2024-07-22 13:11:09.664081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:49:31.567 [2024-07-22 13:11:09.664100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:77728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.567 [2024-07-22 13:11:09.664112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:49:31.567 [2024-07-22 13:11:09.664140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:77760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.567 [2024-07-22 13:11:09.664184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:49:31.567 [2024-07-22 13:11:09.664209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:77776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.567 [2024-07-22 13:11:09.664224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:49:31.567 [2024-07-22 13:11:09.664245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:73 nsid:1 lba:77800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.567 [2024-07-22 13:11:09.664259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:49:31.567 [2024-07-22 13:11:09.664280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:77824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.567 [2024-07-22 13:11:09.664294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:49:31.567 [2024-07-22 13:11:09.664315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:77832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.567 [2024-07-22 13:11:09.664330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:49:31.567 [2024-07-22 13:11:09.664373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:77840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.567 [2024-07-22 13:11:09.664389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:49:31.567 [2024-07-22 13:11:09.664410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:77864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.567 [2024-07-22 13:11:09.664425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:49:31.567 [2024-07-22 13:11:09.664445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:77872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.567 [2024-07-22 13:11:09.664459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:49:31.567 [2024-07-22 13:11:09.664480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:77880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.567 [2024-07-22 13:11:09.664494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:49:31.567 [2024-07-22 13:11:09.664529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:77888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.567 [2024-07-22 13:11:09.664557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:49:31.567 [2024-07-22 13:11:09.664575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:77912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.567 [2024-07-22 13:11:09.664588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:49:31.567 [2024-07-22 13:11:09.664608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:77952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.567 [2024-07-22 13:11:09.664621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:49:31.567 [2024-07-22 13:11:09.664717] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:78336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.567 [2024-07-22 13:11:09.664740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:49:31.567 [2024-07-22 13:11:09.664764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:78344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:31.567 [2024-07-22 13:11:09.664779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:49:31.567 [2024-07-22 13:11:09.664800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:78352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.567 [2024-07-22 13:11:09.664813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:49:31.567 [2024-07-22 13:11:09.664833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:78360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:31.567 [2024-07-22 13:11:09.664851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:49:31.567 [2024-07-22 13:11:09.664871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:78368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.567 [2024-07-22 13:11:09.664884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:49:31.567 [2024-07-22 13:11:09.664903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:78376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:31.567 [2024-07-22 13:11:09.664933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:49:31.567 [2024-07-22 13:11:09.664955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:78384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:31.567 [2024-07-22 13:11:09.664968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:49:31.567 [2024-07-22 13:11:09.664988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:78392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.567 [2024-07-22 13:11:09.665001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:49:31.567 [2024-07-22 13:11:09.665022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:78400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.567 [2024-07-22 13:11:09.665035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:49:31.567 [2024-07-22 13:11:09.665054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:78408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.567 [2024-07-22 13:11:09.665067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0058 p:0 m:0 
dnr:0 00:49:31.567 [2024-07-22 13:11:09.665088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:78416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.567 [2024-07-22 13:11:09.665101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:49:31.567 [2024-07-22 13:11:09.665121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:78424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.567 [2024-07-22 13:11:09.665134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:49:31.567 [2024-07-22 13:11:09.665188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:78432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.567 [2024-07-22 13:11:09.665218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:49:31.567 [2024-07-22 13:11:09.665243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:78440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.568 [2024-07-22 13:11:09.665258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:49:31.568 [2024-07-22 13:11:09.665280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:78448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:31.568 [2024-07-22 13:11:09.665294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:49:31.568 [2024-07-22 13:11:09.665316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:78456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:31.568 [2024-07-22 13:11:09.665331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:49:31.568 [2024-07-22 13:11:09.665354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:78464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.568 [2024-07-22 13:11:09.665368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:49:31.568 [2024-07-22 13:11:09.665390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:78472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.568 [2024-07-22 13:11:09.665412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:49:31.568 [2024-07-22 13:11:09.665437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:78480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:31.568 [2024-07-22 13:11:09.665452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:49:31.568 [2024-07-22 13:11:09.665474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:78488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.568 [2024-07-22 13:11:09.665488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:49:31.568 [2024-07-22 13:11:09.665510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:78496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.568 [2024-07-22 13:11:09.665524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:49:31.568 [2024-07-22 13:11:09.665824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:78504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:31.568 [2024-07-22 13:11:09.665843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:49:31.568 [2024-07-22 13:11:09.665865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:78512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:31.568 [2024-07-22 13:11:09.665878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:49:31.568 [2024-07-22 13:11:09.665898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:78520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.568 [2024-07-22 13:11:09.665912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:49:31.568 [2024-07-22 13:11:09.665932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:78528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:31.568 [2024-07-22 13:11:09.665945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:49:31.568 [2024-07-22 13:11:09.665965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:78536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.568 [2024-07-22 13:11:09.665978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:49:31.568 [2024-07-22 13:11:09.665999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:78544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:31.568 [2024-07-22 13:11:09.666012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:49:31.568 [2024-07-22 13:11:09.666032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:78552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:31.568 [2024-07-22 13:11:09.666045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:49:31.568 [2024-07-22 13:11:09.666065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:78560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.568 [2024-07-22 13:11:09.666078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:49:31.568 [2024-07-22 13:11:09.666099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:78568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.568 [2024-07-22 13:11:09.666112] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:49:31.568 [2024-07-22 13:11:09.666140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:78576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:31.568 [2024-07-22 13:11:09.666186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:49:31.568 [2024-07-22 13:11:09.666235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:78584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:31.568 [2024-07-22 13:11:09.666253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:49:31.568 [2024-07-22 13:11:09.666276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:78592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:31.568 [2024-07-22 13:11:09.666291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:49:31.568 [2024-07-22 13:11:09.666313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:77968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.568 [2024-07-22 13:11:09.666327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:49:31.568 [2024-07-22 13:11:09.666350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:77976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.568 [2024-07-22 13:11:09.666364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:49:31.568 [2024-07-22 13:11:09.666387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:77984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.568 [2024-07-22 13:11:09.666411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:49:31.568 [2024-07-22 13:11:09.666434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:78000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.568 [2024-07-22 13:11:09.666448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:49:31.568 [2024-07-22 13:11:09.666470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:78008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.568 [2024-07-22 13:11:09.666484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:49:31.568 [2024-07-22 13:11:09.666518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:78024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.568 [2024-07-22 13:11:09.666531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:49:31.568 [2024-07-22 13:11:09.666589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:78032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:49:31.568 [2024-07-22 13:11:09.666602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:49:31.568 [2024-07-22 13:11:09.666651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:78040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.568 [2024-07-22 13:11:09.666668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:49:31.568 [2024-07-22 13:11:09.666824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:78600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:31.568 [2024-07-22 13:11:09.666849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:49:31.568 [2024-07-22 13:11:09.666889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:78608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.568 [2024-07-22 13:11:09.666907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:49:31.568 [2024-07-22 13:11:09.666949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:78616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.568 [2024-07-22 13:11:09.666980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:49:31.568 [2024-07-22 13:11:09.667005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:78624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.568 [2024-07-22 13:11:09.667018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:49:31.569 [2024-07-22 13:11:09.667041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:78632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:31.569 [2024-07-22 13:11:09.667055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:49:31.569 [2024-07-22 13:11:09.667092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:78640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.569 [2024-07-22 13:11:09.667105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:49:31.569 [2024-07-22 13:11:09.667128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:78648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.569 [2024-07-22 13:11:09.667141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:49:31.569 [2024-07-22 13:11:09.667196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:78656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.569 [2024-07-22 13:11:09.667210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:49:31.569 [2024-07-22 13:11:09.667247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 
nsid:1 lba:78664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:31.569 [2024-07-22 13:11:09.667264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:31.569 [2024-07-22 13:11:09.667290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:78672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.569 [2024-07-22 13:11:09.667305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:31.569 [2024-07-22 13:11:09.667330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:78680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:31.569 [2024-07-22 13:11:09.667344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:49:31.569 [2024-07-22 13:11:09.667368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:78688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:31.569 [2024-07-22 13:11:09.667382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:49:31.569 [2024-07-22 13:11:09.667406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:78696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:31.569 [2024-07-22 13:11:09.667420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:49:31.569 [2024-07-22 13:11:09.667455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:78704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.569 [2024-07-22 13:11:09.667486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:49:31.569 [2024-07-22 13:11:09.667510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:78712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:31.569 [2024-07-22 13:11:09.667539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:49:31.569 [2024-07-22 13:11:09.667562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:78720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:31.569 [2024-07-22 13:11:09.667576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:49:31.569 [2024-07-22 13:11:09.667599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:78728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.569 [2024-07-22 13:11:09.667612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:49:31.569 [2024-07-22 13:11:09.667634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:78736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:31.569 [2024-07-22 13:11:09.667647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:49:31.569 [2024-07-22 13:11:09.667670] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:78744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:31.569 [2024-07-22 13:11:09.667683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:49:31.569 [2024-07-22 13:11:09.667706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:78752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:31.569 [2024-07-22 13:11:09.667719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:49:31.569 [2024-07-22 13:11:09.667742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:78760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.569 [2024-07-22 13:11:09.667755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:49:31.569 [2024-07-22 13:11:09.667778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:78768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:31.569 [2024-07-22 13:11:09.667791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:49:31.569 [2024-07-22 13:11:09.667814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:78776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:31.569 [2024-07-22 13:11:09.667826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:49:31.569 [2024-07-22 13:11:09.667849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:78784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:31.569 [2024-07-22 13:11:09.667862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:49:31.569 [2024-07-22 13:11:09.667885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:78792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:31.569 [2024-07-22 13:11:09.667898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:49:31.569 [2024-07-22 13:11:09.667922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:78056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.569 [2024-07-22 13:11:09.667941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:49:31.569 [2024-07-22 13:11:09.667966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:78064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.569 [2024-07-22 13:11:09.667980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:49:31.569 [2024-07-22 13:11:09.668003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:78072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.569 [2024-07-22 13:11:09.668016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 
00:49:31.569 [2024-07-22 13:11:09.668038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:78096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.569 [2024-07-22 13:11:09.668052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:49:31.569 [2024-07-22 13:11:09.668075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:78112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.569 [2024-07-22 13:11:09.668088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:49:31.569 [2024-07-22 13:11:09.668111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:78120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.569 [2024-07-22 13:11:09.668124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:49:31.569 [2024-07-22 13:11:09.668163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:78128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.569 [2024-07-22 13:11:09.668193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:49:31.569 [2024-07-22 13:11:09.668228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:78144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.569 [2024-07-22 13:11:09.668242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:49:31.569 [2024-07-22 13:11:09.668271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:78800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:31.569 [2024-07-22 13:11:09.668284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:49:31.569 [2024-07-22 13:11:09.668308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:78808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.569 [2024-07-22 13:11:09.668322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:49:31.569 [2024-07-22 13:11:09.668346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:78816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.569 [2024-07-22 13:11:09.668360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:49:31.569 [2024-07-22 13:11:09.668384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:78160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.569 [2024-07-22 13:11:09.668397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:49:31.569 [2024-07-22 13:11:09.668422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:78168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.569 [2024-07-22 13:11:09.668443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:49:31.569 [2024-07-22 13:11:09.668469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:78176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.569 [2024-07-22 13:11:09.668483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:49:31.569 [2024-07-22 13:11:09.668523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:78184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.569 [2024-07-22 13:11:09.668536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:49:31.569 [2024-07-22 13:11:09.668574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:78192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.569 [2024-07-22 13:11:09.668587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:49:31.569 [2024-07-22 13:11:09.668610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:78200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.570 [2024-07-22 13:11:09.668624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:49:31.570 [2024-07-22 13:11:09.668647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.570 [2024-07-22 13:11:09.668660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:49:31.570 [2024-07-22 13:11:09.668684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:78216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.570 [2024-07-22 13:11:09.668696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:49:31.570 [2024-07-22 13:11:09.668719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:78824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.570 [2024-07-22 13:11:09.668732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:49:31.570 [2024-07-22 13:11:09.668756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:78832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.570 [2024-07-22 13:11:09.668769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:49:31.570 [2024-07-22 13:11:09.668792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:78840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:31.570 [2024-07-22 13:11:09.668804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:49:31.570 [2024-07-22 13:11:09.668827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:78848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:31.570 [2024-07-22 13:11:09.668840] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:49:31.570 [2024-07-22 13:11:09.668863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:78856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.570 [2024-07-22 13:11:09.668876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:49:31.570 [2024-07-22 13:11:09.668899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:78864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.570 [2024-07-22 13:11:09.668912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:49:31.570 [2024-07-22 13:11:09.668941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:78232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.570 [2024-07-22 13:11:09.668955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:49:31.570 [2024-07-22 13:11:09.668978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:78240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.570 [2024-07-22 13:11:09.668991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:49:31.570 [2024-07-22 13:11:09.669031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:78248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.570 [2024-07-22 13:11:09.669045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:49:31.570 [2024-07-22 13:11:09.669068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:78272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.570 [2024-07-22 13:11:09.669081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:49:31.570 [2024-07-22 13:11:09.669123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:78288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.570 [2024-07-22 13:11:09.669137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:49:31.570 [2024-07-22 13:11:09.669161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:78296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.570 [2024-07-22 13:11:09.669191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:49:31.570 [2024-07-22 13:11:09.669228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:78304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.570 [2024-07-22 13:11:09.669245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:49:31.570 [2024-07-22 13:11:09.669271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:78320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:49:31.570 [2024-07-22 13:11:09.669286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:49:31.570 [2024-07-22 13:11:16.703382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:92312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:31.570 [2024-07-22 13:11:16.703450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:49:31.570 [2024-07-22 13:11:16.703532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:92320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:31.570 [2024-07-22 13:11:16.703551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:49:31.570 [2024-07-22 13:11:16.703571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:92328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.570 [2024-07-22 13:11:16.703585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:49:31.570 [2024-07-22 13:11:16.703605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:92336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:31.570 [2024-07-22 13:11:16.703618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:49:31.570 [2024-07-22 13:11:16.703654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:92344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:31.570 [2024-07-22 13:11:16.703669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:49:31.570 [2024-07-22 13:11:16.703687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:92352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.570 [2024-07-22 13:11:16.703700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:49:31.570 [2024-07-22 13:11:16.703718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:92360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:31.570 [2024-07-22 13:11:16.703731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:49:31.570 [2024-07-22 13:11:16.703749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:91752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.570 [2024-07-22 13:11:16.703761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:49:31.570 [2024-07-22 13:11:16.703781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:91760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.570 [2024-07-22 13:11:16.703794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:49:31.570 [2024-07-22 13:11:16.703813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 
nsid:1 lba:91776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.570 [2024-07-22 13:11:16.703826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:49:31.570 [2024-07-22 13:11:16.703844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:91800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.570 [2024-07-22 13:11:16.703856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:49:31.570 [2024-07-22 13:11:16.703877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:91808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.570 [2024-07-22 13:11:16.703890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:49:31.570 [2024-07-22 13:11:16.703908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:91848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.570 [2024-07-22 13:11:16.703920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:49:31.570 [2024-07-22 13:11:16.703938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:91856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.570 [2024-07-22 13:11:16.703951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:49:31.570 [2024-07-22 13:11:16.703969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:91864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.570 [2024-07-22 13:11:16.703981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:49:31.570 [2024-07-22 13:11:16.703999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:92368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:31.570 [2024-07-22 13:11:16.704012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:49:31.570 [2024-07-22 13:11:16.704030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:92376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:31.570 [2024-07-22 13:11:16.704092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:49:31.570 [2024-07-22 13:11:16.704116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:92384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.570 [2024-07-22 13:11:16.704130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:49:31.570 [2024-07-22 13:11:16.704165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:92392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:31.570 [2024-07-22 13:11:16.704209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:49:31.570 [2024-07-22 13:11:16.704234] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:92400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.570 [2024-07-22 13:11:16.704250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:49:31.570 [2024-07-22 13:11:16.704271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:92408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.570 [2024-07-22 13:11:16.704286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:49:31.570 [2024-07-22 13:11:16.704974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:92416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:31.571 [2024-07-22 13:11:16.705001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:49:31.571 [2024-07-22 13:11:16.705029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:92424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:31.571 [2024-07-22 13:11:16.705045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:49:31.571 [2024-07-22 13:11:16.705074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:92432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.571 [2024-07-22 13:11:16.705089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:49:31.571 [2024-07-22 13:11:16.705112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:92440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.571 [2024-07-22 13:11:16.705126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:49:31.571 [2024-07-22 13:11:16.705166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:92448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.571 [2024-07-22 13:11:16.705180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:49:31.571 [2024-07-22 13:11:16.705204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:92456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:31.571 [2024-07-22 13:11:16.705232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:49:31.571 [2024-07-22 13:11:16.705259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:92464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:31.571 [2024-07-22 13:11:16.705273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:49:31.571 [2024-07-22 13:11:16.705297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:92472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:31.571 [2024-07-22 13:11:16.705324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0057 p:0 m:0 
dnr:0 00:49:31.571 [2024-07-22 13:11:16.705351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:92480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.571 [2024-07-22 13:11:16.705366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:49:31.571 [2024-07-22 13:11:16.705389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:92488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.571 [2024-07-22 13:11:16.705404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:49:31.571 [2024-07-22 13:11:16.705427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:92496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.571 [2024-07-22 13:11:16.705442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:49:31.571 [2024-07-22 13:11:16.705465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:92504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.571 [2024-07-22 13:11:16.705480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:49:31.571 [2024-07-22 13:11:16.705505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:92512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.571 [2024-07-22 13:11:16.705519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:49:31.571 [2024-07-22 13:11:16.705573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:91872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.571 [2024-07-22 13:11:16.705588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:49:31.571 [2024-07-22 13:11:16.705611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:91888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.571 [2024-07-22 13:11:16.705624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:49:31.571 [2024-07-22 13:11:16.705646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:91944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.571 [2024-07-22 13:11:16.705660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:49:31.571 [2024-07-22 13:11:16.705683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:91960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.571 [2024-07-22 13:11:16.705696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:49:31.571 [2024-07-22 13:11:16.705718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:91968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.571 [2024-07-22 13:11:16.705732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:49:31.571 [2024-07-22 13:11:16.705754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:91976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.571 [2024-07-22 13:11:16.705768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:49:31.571 [2024-07-22 13:11:16.705790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:92008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.571 [2024-07-22 13:11:16.705804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:49:31.571 [2024-07-22 13:11:16.705834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:92032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.571 [2024-07-22 13:11:16.705850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:49:31.571 [2024-07-22 13:11:16.705872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:92520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.571 [2024-07-22 13:11:16.705887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:49:31.571 [2024-07-22 13:11:16.705909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:92528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:31.571 [2024-07-22 13:11:16.705922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:49:31.571 [2024-07-22 13:11:16.705945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:92536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.571 [2024-07-22 13:11:16.705958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:49:31.571 [2024-07-22 13:11:16.705982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:92544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.571 [2024-07-22 13:11:16.705996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:49:31.571 [2024-07-22 13:11:16.706018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:92552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.571 [2024-07-22 13:11:16.706032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:49:31.571 [2024-07-22 13:11:16.706054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:92560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:31.571 [2024-07-22 13:11:16.706068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:49:31.571 [2024-07-22 13:11:16.706090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:92568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.571 [2024-07-22 13:11:16.706103] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:49:31.571 [2024-07-22 13:11:16.706126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:92576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.571 [2024-07-22 13:11:16.706140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:49:31.571 [2024-07-22 13:11:16.706191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:92584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.571 [2024-07-22 13:11:16.706208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:49:31.571 [2024-07-22 13:11:16.706232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:92592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:31.571 [2024-07-22 13:11:16.706247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:49:31.571 [2024-07-22 13:11:16.706271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:92600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:31.571 [2024-07-22 13:11:16.706286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:49:31.571 [2024-07-22 13:11:16.706317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:92608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.571 [2024-07-22 13:11:16.706333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:49:31.571 [2024-07-22 13:11:16.706357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:92616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:31.571 [2024-07-22 13:11:16.706371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:49:31.571 [2024-07-22 13:11:16.706395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:92624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:31.571 [2024-07-22 13:11:16.706409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:49:31.571 [2024-07-22 13:11:16.706433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:92632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:31.571 [2024-07-22 13:11:16.706447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:49:31.571 [2024-07-22 13:11:16.706472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:92048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.571 [2024-07-22 13:11:16.706486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:49:31.571 [2024-07-22 13:11:16.706510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:92056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:49:31.571 [2024-07-22 13:11:16.706536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:49:31.571 [2024-07-22 13:11:16.706571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:92112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.571 [2024-07-22 13:11:16.706585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:49:31.571 [2024-07-22 13:11:16.706607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:92136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.572 [2024-07-22 13:11:16.706648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:49:31.572 [2024-07-22 13:11:16.706673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:92152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.572 [2024-07-22 13:11:16.706687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:49:31.572 [2024-07-22 13:11:16.706711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:92160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.572 [2024-07-22 13:11:16.706725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:49:31.572 [2024-07-22 13:11:16.706749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:92176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.572 [2024-07-22 13:11:16.706763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:49:31.572 [2024-07-22 13:11:16.706787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:92184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.572 [2024-07-22 13:11:16.706802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:49:31.572 [2024-07-22 13:11:16.706833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:92640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.572 [2024-07-22 13:11:16.706849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:49:31.572 [2024-07-22 13:11:16.706873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:92648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:31.572 [2024-07-22 13:11:16.706887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:49:31.572 [2024-07-22 13:11:16.706910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:92656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.572 [2024-07-22 13:11:16.706944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:49:31.572 [2024-07-22 13:11:16.706968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 
lba:92664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:31.572 [2024-07-22 13:11:16.706982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:49:31.572 [2024-07-22 13:11:16.707005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:92672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:31.572 [2024-07-22 13:11:16.707019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:31.572 [2024-07-22 13:11:16.707056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:92680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:31.572 [2024-07-22 13:11:16.707071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:31.572 [2024-07-22 13:11:16.707261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:92688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:31.572 [2024-07-22 13:11:16.707287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:49:31.572 [2024-07-22 13:11:16.707318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:92696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:31.572 [2024-07-22 13:11:16.707335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:49:31.572 [2024-07-22 13:11:16.707363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:92704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.572 [2024-07-22 13:11:16.707378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:49:31.572 [2024-07-22 13:11:16.707405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:92712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.572 [2024-07-22 13:11:16.707419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:49:31.572 [2024-07-22 13:11:16.707446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:92720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:31.572 [2024-07-22 13:11:16.707461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:49:31.572 [2024-07-22 13:11:16.707489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:92728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:31.572 [2024-07-22 13:11:16.707546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:49:31.572 [2024-07-22 13:11:16.707571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:92736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:31.572 [2024-07-22 13:11:16.707595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:49:31.572 [2024-07-22 13:11:16.707623] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.572 [2024-07-22 13:11:16.707637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:49:31.572 [2024-07-22 13:11:16.707663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:92192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.572 [2024-07-22 13:11:16.707677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:49:31.572 [2024-07-22 13:11:16.707719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:92200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.572 [2024-07-22 13:11:16.707733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:49:31.572 [2024-07-22 13:11:16.707759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:92208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.572 [2024-07-22 13:11:16.707773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:49:31.572 [2024-07-22 13:11:16.707799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:92216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.572 [2024-07-22 13:11:16.707813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:49:31.572 [2024-07-22 13:11:16.707840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:92224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.572 [2024-07-22 13:11:16.707860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:49:31.572 [2024-07-22 13:11:16.707887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:92232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.572 [2024-07-22 13:11:16.707901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:49:31.572 [2024-07-22 13:11:16.707928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:92256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.572 [2024-07-22 13:11:16.707942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:49:31.572 [2024-07-22 13:11:16.707968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:92272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.572 [2024-07-22 13:11:16.707982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:49:31.572 [2024-07-22 13:11:16.708009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:92752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.572 [2024-07-22 13:11:16.708022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0012 p:0 m:0 
dnr:0 00:49:31.572 [2024-07-22 13:11:16.708063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:92760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:31.572 [2024-07-22 13:11:16.708076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:49:31.572 [2024-07-22 13:11:16.708101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:92768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:31.572 [2024-07-22 13:11:16.708122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:49:31.572 [2024-07-22 13:11:16.708165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:92776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:31.572 [2024-07-22 13:11:16.708197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:49:31.572 [2024-07-22 13:11:16.708242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:92784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:31.572 [2024-07-22 13:11:16.708261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:49:31.572 [2024-07-22 13:11:16.708289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:92792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:31.572 [2024-07-22 13:11:16.708304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:49:31.572 [2024-07-22 13:11:16.708331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:92800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:31.572 [2024-07-22 13:11:16.708346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:49:31.572 [2024-07-22 13:11:16.708373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:92808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.572 [2024-07-22 13:11:16.708388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:49:31.572 [2024-07-22 13:11:16.708415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:92816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:31.572 [2024-07-22 13:11:16.708430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:49:31.572 [2024-07-22 13:11:16.708456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:92824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:31.572 [2024-07-22 13:11:16.708471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:49:31.572 [2024-07-22 13:11:16.708498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:92832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:31.573 [2024-07-22 13:11:16.708512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:49:31.573 [2024-07-22 13:11:16.708568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:92840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.573 [2024-07-22 13:11:16.708582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:49:31.573 [2024-07-22 13:11:16.708607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:92848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.573 [2024-07-22 13:11:16.708627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:49:31.573 [2024-07-22 13:11:16.708653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:92856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:31.573 [2024-07-22 13:11:16.708667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:49:31.573 [2024-07-22 13:11:16.708693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:92864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:31.573 [2024-07-22 13:11:16.708708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:49:31.573 [2024-07-22 13:11:16.708741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:92872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:31.573 [2024-07-22 13:11:16.708756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:49:31.573 [2024-07-22 13:11:16.708782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:92880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:31.573 [2024-07-22 13:11:16.708796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:49:31.573 [2024-07-22 13:11:16.708822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:92888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.573 [2024-07-22 13:11:16.708836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:49:31.573 [2024-07-22 13:11:16.708861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:92896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:31.573 [2024-07-22 13:11:16.708875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:49:31.573 [2024-07-22 13:11:16.708900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:92904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.573 [2024-07-22 13:11:16.708914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:49:31.573 [2024-07-22 13:11:16.708940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:92912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.573 [2024-07-22 13:11:16.708954] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:49:31.573 [2024-07-22 13:11:16.708979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:92920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.573 [2024-07-22 13:11:16.708993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:49:31.573 [2024-07-22 13:11:16.709019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:92928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.573 [2024-07-22 13:11:16.709032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:49:31.573 [2024-07-22 13:11:16.709058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:92936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.573 [2024-07-22 13:11:16.709072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:49:31.573 [2024-07-22 13:11:16.709097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:92944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:31.573 [2024-07-22 13:11:16.709111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:49:31.573 [2024-07-22 13:11:16.709136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:92952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:31.573 [2024-07-22 13:11:16.709150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:49:31.573 [2024-07-22 13:11:29.957428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:118232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.573 [2024-07-22 13:11:29.957474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:31.573 [2024-07-22 13:11:29.957518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:118240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.573 [2024-07-22 13:11:29.957565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:31.573 [2024-07-22 13:11:29.957579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:118272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.573 [2024-07-22 13:11:29.957592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:31.573 [2024-07-22 13:11:29.957606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:118280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.573 [2024-07-22 13:11:29.957626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:31.573 [2024-07-22 13:11:29.957639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:118288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.573 [2024-07-22 
13:11:29.957651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:31.573 [2024-07-22 13:11:29.957665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:118296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.573 [2024-07-22 13:11:29.957677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:31.573 [2024-07-22 13:11:29.957690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:118320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.573 [2024-07-22 13:11:29.957702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:31.573 [2024-07-22 13:11:29.957715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:118328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.573 [2024-07-22 13:11:29.957727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:31.573 [2024-07-22 13:11:29.957740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:118352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.573 [2024-07-22 13:11:29.957752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:31.573 [2024-07-22 13:11:29.957766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:118360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.573 [2024-07-22 13:11:29.957778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:31.573 [2024-07-22 13:11:29.957791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:118368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.573 [2024-07-22 13:11:29.957803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:31.573 [2024-07-22 13:11:29.957816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:118376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.573 [2024-07-22 13:11:29.957828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:31.573 [2024-07-22 13:11:29.957841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:118416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.573 [2024-07-22 13:11:29.957853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:31.573 [2024-07-22 13:11:29.957867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:118432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.573 [2024-07-22 13:11:29.957887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:31.573 [2024-07-22 13:11:29.957903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:117736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.573 [2024-07-22 13:11:29.957915] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:31.573 [2024-07-22 13:11:29.957929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:117744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.573 [2024-07-22 13:11:29.957941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:31.573 [2024-07-22 13:11:29.957971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:117760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.573 [2024-07-22 13:11:29.957986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:31.573 [2024-07-22 13:11:29.958001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:117776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.573 [2024-07-22 13:11:29.958014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:31.574 [2024-07-22 13:11:29.958040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:117784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.574 [2024-07-22 13:11:29.958052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:31.574 [2024-07-22 13:11:29.958083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:117792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.574 [2024-07-22 13:11:29.958096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:31.574 [2024-07-22 13:11:29.958111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:117800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.574 [2024-07-22 13:11:29.958124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:31.574 [2024-07-22 13:11:29.958138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:117824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.574 [2024-07-22 13:11:29.958167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:31.574 [2024-07-22 13:11:29.958182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:117832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.574 [2024-07-22 13:11:29.958195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:31.574 [2024-07-22 13:11:29.958210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:117864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.574 [2024-07-22 13:11:29.958236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:31.574 [2024-07-22 13:11:29.958265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:117888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.574 [2024-07-22 13:11:29.958278] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:31.574 [2024-07-22 13:11:29.958293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:117896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.574 [2024-07-22 13:11:29.958306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:31.574 [2024-07-22 13:11:29.958321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:117912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.574 [2024-07-22 13:11:29.958346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:31.574 [2024-07-22 13:11:29.958363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:117928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.574 [2024-07-22 13:11:29.958377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:31.574 [2024-07-22 13:11:29.958392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:117944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.574 [2024-07-22 13:11:29.958405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:31.574 [2024-07-22 13:11:29.958420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:117984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.574 [2024-07-22 13:11:29.958433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:31.574 [2024-07-22 13:11:29.958448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:118456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:31.574 [2024-07-22 13:11:29.958462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:31.574 [2024-07-22 13:11:29.958477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:118464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.574 [2024-07-22 13:11:29.958490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:31.574 [2024-07-22 13:11:29.958505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:118472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:31.574 [2024-07-22 13:11:29.958529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:31.574 [2024-07-22 13:11:29.958544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:118480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.574 [2024-07-22 13:11:29.958556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:31.574 [2024-07-22 13:11:29.958570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:118488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:31.574 [2024-07-22 13:11:29.958583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [... repeated nvme_qpair.c 243:nvme_io_qpair_print_command *NOTICE* entries (READ/WRITE, sqid:1, nsid:1, lba 118000-118976, len:8) and their matching 474:spdk_nvme_print_completion "ABORTED - SQ DELETION (00/08)" completions omitted; every I/O still queued on qid:1 was aborted the same way while the controller was reset ...] 00:49:31.576 [2024-07-22 13:11:29.961361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:118424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.576 [2024-07-22 13:11:29.961373] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:31.576 [2024-07-22 13:11:29.961388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:118440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:31.576 [2024-07-22 13:11:29.961401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:31.576 [2024-07-22 13:11:29.961422] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61d020 is same with the state(5) to be set 00:49:31.576 [2024-07-22 13:11:29.961438] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:49:31.576 [2024-07-22 13:11:29.961449] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:49:31.576 [2024-07-22 13:11:29.961463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:118448 len:8 PRP1 0x0 PRP2 0x0 00:49:31.576 [2024-07-22 13:11:29.961477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:31.576 [2024-07-22 13:11:29.961550] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x61d020 was disconnected and freed. reset controller. 00:49:31.576 [2024-07-22 13:11:29.961700] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:49:31.576 [2024-07-22 13:11:29.961725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:31.576 [2024-07-22 13:11:29.961740] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:49:31.576 [2024-07-22 13:11:29.961752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:31.576 [2024-07-22 13:11:29.961764] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:49:31.576 [2024-07-22 13:11:29.961776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:31.576 [2024-07-22 13:11:29.961788] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:49:31.576 [2024-07-22 13:11:29.961800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:31.576 [2024-07-22 13:11:29.961813] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:49:31.576 [2024-07-22 13:11:29.961825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:31.576 [2024-07-22 13:11:29.961836] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x621e40 is same with the state(5) to be set 00:49:31.576 [2024-07-22 13:11:29.963212] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:31.576 [2024-07-22 13:11:29.963248] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x621e40 
(9): Bad file descriptor 00:49:31.576 [2024-07-22 13:11:29.963347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:31.577 [2024-07-22 13:11:29.963404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:31.577 [2024-07-22 13:11:29.963426] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x621e40 with addr=10.0.0.2, port=4421 00:49:31.577 [2024-07-22 13:11:29.963441] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x621e40 is same with the state(5) to be set 00:49:31.577 [2024-07-22 13:11:29.963479] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x621e40 (9): Bad file descriptor 00:49:31.577 [2024-07-22 13:11:29.963500] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:31.577 [2024-07-22 13:11:29.963513] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:31.577 [2024-07-22 13:11:29.963526] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:31.577 [2024-07-22 13:11:29.963549] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:49:31.577 [2024-07-22 13:11:29.963573] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:31.577 [2024-07-22 13:11:40.018471] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:49:31.577 Received shutdown signal, test time was about 55.008904 seconds 00:49:31.577 00:49:31.577 Latency(us) 00:49:31.577 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:49:31.577 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:49:31.577 Verification LBA range: start 0x0 length 0x4000 00:49:31.577 Nvme0n1 : 55.01 11783.30 46.03 0.00 0.00 10845.46 618.12 7015926.69 00:49:31.577 =================================================================================================================== 00:49:31.577 Total : 11783.30 46.03 0.00 0.00 10845.46 618.12 7015926.69 00:49:31.577 13:11:50 -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:49:31.577 13:11:50 -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:49:31.577 13:11:50 -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:49:31.577 13:11:50 -- host/multipath.sh@125 -- # nvmftestfini 00:49:31.577 13:11:50 -- nvmf/common.sh@476 -- # nvmfcleanup 00:49:31.577 13:11:50 -- nvmf/common.sh@116 -- # sync 00:49:31.577 13:11:50 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:49:31.577 13:11:50 -- nvmf/common.sh@119 -- # set +e 00:49:31.577 13:11:50 -- nvmf/common.sh@120 -- # for i in {1..20} 00:49:31.577 13:11:50 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:49:31.577 rmmod nvme_tcp 00:49:31.577 rmmod nvme_fabrics 00:49:31.577 rmmod nvme_keyring 00:49:31.577 13:11:50 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:49:31.577 13:11:50 -- nvmf/common.sh@123 -- # set -e 00:49:31.577 13:11:50 -- nvmf/common.sh@124 -- # return 0 00:49:31.577 13:11:50 -- nvmf/common.sh@477 -- # '[' -n 98153 ']' 00:49:31.577 13:11:50 -- nvmf/common.sh@478 -- # killprocess 98153 00:49:31.577 13:11:50 -- common/autotest_common.sh@926 -- # '[' -z 98153 ']' 00:49:31.577 13:11:50 -- 
common/autotest_common.sh@930 -- # kill -0 98153 00:49:31.577 13:11:50 -- common/autotest_common.sh@931 -- # uname 00:49:31.577 13:11:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:49:31.577 13:11:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 98153 00:49:31.577 13:11:50 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:49:31.577 killing process with pid 98153 00:49:31.577 13:11:50 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:49:31.577 13:11:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 98153' 00:49:31.577 13:11:50 -- common/autotest_common.sh@945 -- # kill 98153 00:49:31.577 13:11:50 -- common/autotest_common.sh@950 -- # wait 98153 00:49:31.577 13:11:50 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:49:31.577 13:11:50 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:49:31.577 13:11:50 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:49:31.577 13:11:50 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:49:31.577 13:11:50 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:49:31.577 13:11:50 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:49:31.577 13:11:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:49:31.577 13:11:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:49:31.836 13:11:50 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:49:31.836 00:49:31.836 real 1m0.854s 00:49:31.836 user 2m50.871s 00:49:31.836 sys 0m14.286s 00:49:31.836 13:11:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:49:31.836 ************************************ 00:49:31.836 END TEST nvmf_multipath 00:49:31.836 13:11:51 -- common/autotest_common.sh@10 -- # set +x 00:49:31.836 ************************************ 00:49:31.836 13:11:51 -- nvmf/nvmf.sh@117 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:49:31.836 13:11:51 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:49:31.836 13:11:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:49:31.836 13:11:51 -- common/autotest_common.sh@10 -- # set +x 00:49:31.836 ************************************ 00:49:31.836 START TEST nvmf_timeout 00:49:31.836 ************************************ 00:49:31.836 13:11:51 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:49:31.836 * Looking for test storage... 
00:49:31.836 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:49:31.836 13:11:51 -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:49:31.836 13:11:51 -- nvmf/common.sh@7 -- # uname -s 00:49:31.836 13:11:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:49:31.836 13:11:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:49:31.836 13:11:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:49:31.836 13:11:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:49:31.836 13:11:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:49:31.836 13:11:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:49:31.836 13:11:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:49:31.836 13:11:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:49:31.836 13:11:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:49:31.836 13:11:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:49:31.836 13:11:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:864ae4a3-cd96-439b-8ef2-6dab4d992115 00:49:31.836 13:11:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=864ae4a3-cd96-439b-8ef2-6dab4d992115 00:49:31.836 13:11:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:49:31.836 13:11:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:49:31.836 13:11:51 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:49:31.836 13:11:51 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:49:31.836 13:11:51 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:49:31.836 13:11:51 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:49:31.836 13:11:51 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:49:31.837 13:11:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:31.837 13:11:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:31.837 13:11:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:31.837 13:11:51 -- paths/export.sh@5 
-- # export PATH 00:49:31.837 13:11:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:31.837 13:11:51 -- nvmf/common.sh@46 -- # : 0 00:49:31.837 13:11:51 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:49:31.837 13:11:51 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:49:31.837 13:11:51 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:49:31.837 13:11:51 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:49:31.837 13:11:51 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:49:31.837 13:11:51 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:49:31.837 13:11:51 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:49:31.837 13:11:51 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:49:31.837 13:11:51 -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:49:31.837 13:11:51 -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:49:31.837 13:11:51 -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:49:31.837 13:11:51 -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:49:31.837 13:11:51 -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:49:31.837 13:11:51 -- host/timeout.sh@19 -- # nvmftestinit 00:49:31.837 13:11:51 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:49:31.837 13:11:51 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:49:31.837 13:11:51 -- nvmf/common.sh@436 -- # prepare_net_devs 00:49:31.837 13:11:51 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:49:31.837 13:11:51 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:49:31.837 13:11:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:49:31.837 13:11:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:49:31.837 13:11:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:49:31.837 13:11:51 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:49:31.837 13:11:51 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:49:31.837 13:11:51 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:49:31.837 13:11:51 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:49:31.837 13:11:51 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:49:31.837 13:11:51 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:49:31.837 13:11:51 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:49:31.837 13:11:51 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:49:31.837 13:11:51 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:49:31.837 13:11:51 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:49:31.837 13:11:51 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:49:31.837 13:11:51 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:49:31.837 13:11:51 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:49:31.837 13:11:51 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:49:31.837 13:11:51 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:49:31.837 13:11:51 -- nvmf/common.sh@149 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:49:31.837 13:11:51 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:49:31.837 13:11:51 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:49:31.837 13:11:51 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:49:31.837 13:11:51 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:49:31.837 Cannot find device "nvmf_tgt_br" 00:49:31.837 13:11:51 -- nvmf/common.sh@154 -- # true 00:49:31.837 13:11:51 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:49:31.837 Cannot find device "nvmf_tgt_br2" 00:49:31.837 13:11:51 -- nvmf/common.sh@155 -- # true 00:49:31.837 13:11:51 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:49:31.837 13:11:51 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:49:31.837 Cannot find device "nvmf_tgt_br" 00:49:31.837 13:11:51 -- nvmf/common.sh@157 -- # true 00:49:31.837 13:11:51 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:49:31.837 Cannot find device "nvmf_tgt_br2" 00:49:31.837 13:11:51 -- nvmf/common.sh@158 -- # true 00:49:31.837 13:11:51 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:49:32.096 13:11:51 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:49:32.096 13:11:51 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:49:32.096 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:49:32.096 13:11:51 -- nvmf/common.sh@161 -- # true 00:49:32.096 13:11:51 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:49:32.096 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:49:32.096 13:11:51 -- nvmf/common.sh@162 -- # true 00:49:32.096 13:11:51 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:49:32.096 13:11:51 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:49:32.096 13:11:51 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:49:32.096 13:11:51 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:49:32.096 13:11:51 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:49:32.096 13:11:51 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:49:32.096 13:11:51 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:49:32.096 13:11:51 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:49:32.096 13:11:51 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:49:32.096 13:11:51 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:49:32.096 13:11:51 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:49:32.096 13:11:51 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:49:32.096 13:11:51 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:49:32.096 13:11:51 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:49:32.096 13:11:51 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:49:32.096 13:11:51 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:49:32.096 13:11:51 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:49:32.096 13:11:51 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:49:32.096 13:11:51 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:49:32.096 13:11:51 -- 
nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:49:32.096 13:11:51 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:49:32.096 13:11:51 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:49:32.096 13:11:51 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:49:32.096 13:11:51 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:49:32.096 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:49:32.096 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.085 ms 00:49:32.096 00:49:32.096 --- 10.0.0.2 ping statistics --- 00:49:32.096 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:49:32.096 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:49:32.096 13:11:51 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:49:32.096 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:49:32.096 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:49:32.096 00:49:32.096 --- 10.0.0.3 ping statistics --- 00:49:32.096 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:49:32.096 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:49:32.096 13:11:51 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:49:32.096 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:49:32.096 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.041 ms 00:49:32.096 00:49:32.096 --- 10.0.0.1 ping statistics --- 00:49:32.096 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:49:32.096 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:49:32.096 13:11:51 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:49:32.096 13:11:51 -- nvmf/common.sh@421 -- # return 0 00:49:32.096 13:11:51 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:49:32.096 13:11:51 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:49:32.096 13:11:51 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:49:32.096 13:11:51 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:49:32.096 13:11:51 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:49:32.096 13:11:51 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:49:32.096 13:11:51 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:49:32.096 13:11:51 -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:49:32.096 13:11:51 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:49:32.096 13:11:51 -- common/autotest_common.sh@712 -- # xtrace_disable 00:49:32.096 13:11:51 -- common/autotest_common.sh@10 -- # set +x 00:49:32.096 13:11:51 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:49:32.096 13:11:51 -- nvmf/common.sh@469 -- # nvmfpid=99513 00:49:32.096 13:11:51 -- nvmf/common.sh@470 -- # waitforlisten 99513 00:49:32.096 13:11:51 -- common/autotest_common.sh@819 -- # '[' -z 99513 ']' 00:49:32.096 13:11:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:49:32.096 13:11:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:49:32.096 13:11:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:49:32.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:49:32.096 13:11:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:49:32.096 13:11:51 -- common/autotest_common.sh@10 -- # set +x 00:49:32.355 [2024-07-22 13:11:51.559121] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:49:32.355 [2024-07-22 13:11:51.559242] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:49:32.355 [2024-07-22 13:11:51.696963] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:49:32.355 [2024-07-22 13:11:51.764299] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:49:32.355 [2024-07-22 13:11:51.764439] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:49:32.355 [2024-07-22 13:11:51.764453] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:49:32.355 [2024-07-22 13:11:51.764462] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:49:32.355 [2024-07-22 13:11:51.764610] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:49:32.355 [2024-07-22 13:11:51.764636] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:49:33.289 13:11:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:49:33.289 13:11:52 -- common/autotest_common.sh@852 -- # return 0 00:49:33.289 13:11:52 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:49:33.289 13:11:52 -- common/autotest_common.sh@718 -- # xtrace_disable 00:49:33.289 13:11:52 -- common/autotest_common.sh@10 -- # set +x 00:49:33.289 13:11:52 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:49:33.289 13:11:52 -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:49:33.289 13:11:52 -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:49:33.547 [2024-07-22 13:11:52.826323] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:49:33.547 13:11:52 -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:49:33.805 Malloc0 00:49:33.805 13:11:53 -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:49:34.063 13:11:53 -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:49:34.321 13:11:53 -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:49:34.580 [2024-07-22 13:11:53.859668] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:49:34.580 13:11:53 -- host/timeout.sh@32 -- # bdevperf_pid=99604 00:49:34.580 13:11:53 -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:49:34.580 13:11:53 -- host/timeout.sh@34 -- # waitforlisten 99604 /var/tmp/bdevperf.sock 00:49:34.580 13:11:53 -- common/autotest_common.sh@819 -- # '[' -z 99604 ']' 00:49:34.580 13:11:53 -- common/autotest_common.sh@823 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:49:34.580 13:11:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:49:34.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:49:34.580 13:11:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:49:34.580 13:11:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:49:34.580 13:11:53 -- common/autotest_common.sh@10 -- # set +x 00:49:34.580 [2024-07-22 13:11:53.930268] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:49:34.580 [2024-07-22 13:11:53.930373] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99604 ] 00:49:34.839 [2024-07-22 13:11:54.069054] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:49:34.839 [2024-07-22 13:11:54.137693] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:49:35.774 13:11:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:49:35.774 13:11:54 -- common/autotest_common.sh@852 -- # return 0 00:49:35.774 13:11:54 -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:49:35.774 13:11:55 -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:49:36.031 NVMe0n1 00:49:36.031 13:11:55 -- host/timeout.sh@51 -- # rpc_pid=99652 00:49:36.031 13:11:55 -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:49:36.031 13:11:55 -- host/timeout.sh@53 -- # sleep 1 00:49:36.289 Running I/O for 10 seconds... 
00:49:37.225 13:11:56 -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:49:37.225 [2024-07-22 13:11:56.626931] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581360 is same with the state(5) to be set 00:49:37.225 [2024-07-22 13:11:56.627010] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581360 is same with the state(5) to be set 00:49:37.225 [2024-07-22 13:11:56.627034] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581360 is same with the state(5) to be set 00:49:37.225 [2024-07-22 13:11:56.627042] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581360 is same with the state(5) to be set 00:49:37.225 [2024-07-22 13:11:56.627050] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581360 is same with the state(5) to be set 00:49:37.225 [2024-07-22 13:11:56.627059] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581360 is same with the state(5) to be set 00:49:37.225 [2024-07-22 13:11:56.627075] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581360 is same with the state(5) to be set 00:49:37.225 [2024-07-22 13:11:56.627082] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581360 is same with the state(5) to be set 00:49:37.225 [2024-07-22 13:11:56.627089] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581360 is same with the state(5) to be set 00:49:37.225 [2024-07-22 13:11:56.627096] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581360 is same with the state(5) to be set 00:49:37.225 [2024-07-22 13:11:56.627104] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581360 is same with the state(5) to be set 00:49:37.225 [2024-07-22 13:11:56.627111] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581360 is same with the state(5) to be set 00:49:37.225 [2024-07-22 13:11:56.627133] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581360 is same with the state(5) to be set 00:49:37.225 [2024-07-22 13:11:56.627174] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581360 is same with the state(5) to be set 00:49:37.225 [2024-07-22 13:11:56.627183] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581360 is same with the state(5) to be set 00:49:37.225 [2024-07-22 13:11:56.627191] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581360 is same with the state(5) to be set 00:49:37.225 [2024-07-22 13:11:56.627200] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581360 is same with the state(5) to be set 00:49:37.225 [2024-07-22 13:11:56.627208] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581360 is same with the state(5) to be set 00:49:37.225 [2024-07-22 13:11:56.627594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:37.225 [2024-07-22 13:11:56.627623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:37.226 [2024-07-22 13:11:56.627645] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:37.226 [2024-07-22 13:11:56.627655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:37.226 [2024-07-22 13:11:56.627666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:37.226 [2024-07-22 13:11:56.627681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:37.226 [2024-07-22 13:11:56.627692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:37.226 [2024-07-22 13:11:56.627701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:37.226 [2024-07-22 13:11:56.627712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:130944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:37.226 [2024-07-22 13:11:56.627721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:37.226 [2024-07-22 13:11:56.627732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:130952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:37.226 [2024-07-22 13:11:56.627741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:37.226 [2024-07-22 13:11:56.627751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:130968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:37.226 [2024-07-22 13:11:56.627760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:37.226 [2024-07-22 13:11:56.627770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:130976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:37.226 [2024-07-22 13:11:56.627779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:37.226 [2024-07-22 13:11:56.627789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:130984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:37.226 [2024-07-22 13:11:56.627798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:37.226 [2024-07-22 13:11:56.627808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:131016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:37.226 [2024-07-22 13:11:56.627817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:37.226 [2024-07-22 13:11:56.627827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:131032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:37.226 [2024-07-22 13:11:56.627835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:37.226 [2024-07-22 13:11:56.627846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:125 nsid:1 lba:131040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:37.226 [2024-07-22 13:11:56.627855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:37.226 [2024-07-22 13:11:56.627865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:37.226 [2024-07-22 13:11:56.627874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:37.226 [2024-07-22 13:11:56.627885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:37.226 [2024-07-22 13:11:56.627893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:37.226 [2024-07-22 13:11:56.627904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:37.226 [2024-07-22 13:11:56.627912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:37.226 [2024-07-22 13:11:56.627923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:37.226 [2024-07-22 13:11:56.627932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:37.226 [2024-07-22 13:11:56.627942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:37.226 [2024-07-22 13:11:56.627953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:37.226 [2024-07-22 13:11:56.627964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:37.226 [2024-07-22 13:11:56.627972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:37.226 [2024-07-22 13:11:56.627983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:37.226 [2024-07-22 13:11:56.627993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:37.226 [2024-07-22 13:11:56.628004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:37.226 [2024-07-22 13:11:56.628013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:37.226 [2024-07-22 13:11:56.628024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:37.226 [2024-07-22 13:11:56.628033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:37.226 [2024-07-22 13:11:56.628043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:656 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:49:37.226 [2024-07-22 13:11:56.628052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:37.226 [2024-07-22 13:11:56.628063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:37.226 [2024-07-22 13:11:56.628071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:37.226 [2024-07-22 13:11:56.628082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:37.226 [2024-07-22 13:11:56.628090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:37.226 [2024-07-22 13:11:56.628101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:131048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:37.226 [2024-07-22 13:11:56.628110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:37.226 [2024-07-22 13:11:56.628121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:0 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:37.226 [2024-07-22 13:11:56.628130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:37.226 [2024-07-22 13:11:56.628140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:16 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:37.226 [2024-07-22 13:11:56.628175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:37.226 [2024-07-22 13:11:56.628189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:24 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:37.226 [2024-07-22 13:11:56.628198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:37.226 [2024-07-22 13:11:56.628209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:32 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:37.226 [2024-07-22 13:11:56.628218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:37.226 [2024-07-22 13:11:56.628229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:48 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:37.226 [2024-07-22 13:11:56.628239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:37.226 [2024-07-22 13:11:56.628251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:80 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:37.226 [2024-07-22 13:11:56.628260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:37.226 [2024-07-22 13:11:56.628271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:37.226 [2024-07-22 13:11:56.628280] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:37.226 [2024-07-22 13:11:56.628291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:37.226 [2024-07-22 13:11:56.628300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:37.226 [2024-07-22 13:11:56.628310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:37.226 [2024-07-22 13:11:56.628319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:37.226 [2024-07-22 13:11:56.628330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:37.226 [2024-07-22 13:11:56.628340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:37.226 [2024-07-22 13:11:56.628351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:37.226 [2024-07-22 13:11:56.628360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:37.226 [2024-07-22 13:11:56.628371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:37.226 [2024-07-22 13:11:56.628380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:37.226 [2024-07-22 13:11:56.628391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:37.226 [2024-07-22 13:11:56.628400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:37.226 [2024-07-22 13:11:56.628411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:37.226 [2024-07-22 13:11:56.628420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:37.226 [2024-07-22 13:11:56.628431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:37.226 [2024-07-22 13:11:56.628440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:37.227 [2024-07-22 13:11:56.628451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:37.227 [2024-07-22 13:11:56.628460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:37.227 [2024-07-22 13:11:56.628471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:37.227 [2024-07-22 13:11:56.628481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:37.227 [2024-07-22 13:11:56.628492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:37.227 [2024-07-22 13:11:56.628501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:37.227 [2024-07-22 13:11:56.628512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:37.227 [2024-07-22 13:11:56.628521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:37.227 [2024-07-22 13:11:56.628546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:37.227 [2024-07-22 13:11:56.628554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:37.227 [2024-07-22 13:11:56.628565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:37.227 [2024-07-22 13:11:56.628574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:37.227 [2024-07-22 13:11:56.628585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:37.227 [2024-07-22 13:11:56.628594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:37.227 [2024-07-22 13:11:56.628604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:37.227 [2024-07-22 13:11:56.628613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:37.227 [2024-07-22 13:11:56.628624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:37.227 [2024-07-22 13:11:56.628633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:37.227 [2024-07-22 13:11:56.628644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:37.227 [2024-07-22 13:11:56.628653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:37.227 [2024-07-22 13:11:56.628664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:37.227 [2024-07-22 13:11:56.628673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:37.227 [2024-07-22 13:11:56.628683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:37.227 [2024-07-22 13:11:56.628692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:49:37.227 [2024-07-22 13:11:56.628703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:37.227 [2024-07-22 13:11:56.628711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:37.227 [2024-07-22 13:11:56.628721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:37.227 [2024-07-22 13:11:56.628730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:37.227 [2024-07-22 13:11:56.628740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:37.227 [2024-07-22 13:11:56.628748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:37.227 [2024-07-22 13:11:56.628759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:37.227 [2024-07-22 13:11:56.628768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:37.227 [2024-07-22 13:11:56.628778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:37.227 [2024-07-22 13:11:56.628786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:37.227 [2024-07-22 13:11:56.628797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:37.227 [2024-07-22 13:11:56.628805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:37.227 [2024-07-22 13:11:56.628816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:37.227 [2024-07-22 13:11:56.628825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:37.227 [2024-07-22 13:11:56.628836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:37.227 [2024-07-22 13:11:56.628845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:37.227 [2024-07-22 13:11:56.628855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:37.227 [2024-07-22 13:11:56.628866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:37.227 [2024-07-22 13:11:56.628876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:37.227 [2024-07-22 13:11:56.628885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:37.227 [2024-07-22 13:11:56.628896] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:37.227 [2024-07-22 13:11:56.628905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:37.227 [2024-07-22 13:11:56.628916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:37.227 [2024-07-22 13:11:56.628924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:37.227 [2024-07-22 13:11:56.628935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:37.227 [2024-07-22 13:11:56.628943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:37.227 [2024-07-22 13:11:56.628954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:37.227 [2024-07-22 13:11:56.628962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:37.227 [2024-07-22 13:11:56.628972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:37.227 [2024-07-22 13:11:56.628981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:37.227 [2024-07-22 13:11:56.628991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:37.227 [2024-07-22 13:11:56.629015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:37.227 [2024-07-22 13:11:56.629026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:37.227 [2024-07-22 13:11:56.629035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:37.227 [2024-07-22 13:11:56.629046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:37.227 [2024-07-22 13:11:56.629055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:37.227 [2024-07-22 13:11:56.629066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:37.227 [2024-07-22 13:11:56.629075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:37.227 [2024-07-22 13:11:56.629087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:37.227 [2024-07-22 13:11:56.629096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:37.227 [2024-07-22 13:11:56.629107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 
lba:936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:37.227 [2024-07-22 13:11:56.629116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:37.227 [2024-07-22 13:11:56.629127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:37.227 [2024-07-22 13:11:56.629136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:37.227 [2024-07-22 13:11:56.629147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:37.227 [2024-07-22 13:11:56.629162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:37.227 [2024-07-22 13:11:56.629183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:37.227 [2024-07-22 13:11:56.629192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:37.227 [2024-07-22 13:11:56.629204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:37.227 [2024-07-22 13:11:56.629214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:37.227 [2024-07-22 13:11:56.629225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:37.227 [2024-07-22 13:11:56.629234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:37.227 [2024-07-22 13:11:56.629245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:37.227 [2024-07-22 13:11:56.629254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:37.228 [2024-07-22 13:11:56.629265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:37.228 [2024-07-22 13:11:56.629274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:37.228 [2024-07-22 13:11:56.629285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:1000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:37.228 [2024-07-22 13:11:56.629293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:37.228 [2024-07-22 13:11:56.629304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:1008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:37.228 [2024-07-22 13:11:56.629313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:37.228 [2024-07-22 13:11:56.629325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:49:37.228 [2024-07-22 13:11:56.629334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:37.228 [2024-07-22 13:11:56.629345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:1024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:37.228 [2024-07-22 13:11:56.629354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:37.228 [2024-07-22 13:11:56.629365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:1032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:37.228 [2024-07-22 13:11:56.629374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:37.228 [2024-07-22 13:11:56.629385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:1040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:37.228 [2024-07-22 13:11:56.629394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:37.228 [2024-07-22 13:11:56.629405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:1048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:37.228 [2024-07-22 13:11:56.629414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:37.228 [2024-07-22 13:11:56.629425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:37.228 [2024-07-22 13:11:56.629434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:37.228 [2024-07-22 13:11:56.629445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:37.228 [2024-07-22 13:11:56.629461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:37.228 [2024-07-22 13:11:56.629472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:37.228 [2024-07-22 13:11:56.629481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:37.228 [2024-07-22 13:11:56.629492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:37.228 [2024-07-22 13:11:56.629503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:37.228 [2024-07-22 13:11:56.629515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:37.228 [2024-07-22 13:11:56.629538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:37.228 [2024-07-22 13:11:56.629550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:37.228 [2024-07-22 13:11:56.629558] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:37.228 [2024-07-22 13:11:56.629569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:37.228 [2024-07-22 13:11:56.629578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:37.228 [2024-07-22 13:11:56.629588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:37.228 [2024-07-22 13:11:56.629597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:37.228 [2024-07-22 13:11:56.629608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:37.228 [2024-07-22 13:11:56.629617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:37.228 [2024-07-22 13:11:56.629628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:37.228 [2024-07-22 13:11:56.629637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:37.228 [2024-07-22 13:11:56.629648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:37.228 [2024-07-22 13:11:56.629656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:37.228 [2024-07-22 13:11:56.629667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:37.228 [2024-07-22 13:11:56.629676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:37.228 [2024-07-22 13:11:56.629686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:37.228 [2024-07-22 13:11:56.629695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:37.228 [2024-07-22 13:11:56.629706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:37.228 [2024-07-22 13:11:56.629714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:37.228 [2024-07-22 13:11:56.629724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:37.228 [2024-07-22 13:11:56.629733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:37.228 [2024-07-22 13:11:56.629744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:37.228 [2024-07-22 13:11:56.629753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:37.228 [2024-07-22 13:11:56.629764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:1056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:37.228 [2024-07-22 13:11:56.629773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:37.228 [2024-07-22 13:11:56.629784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:1064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:37.228 [2024-07-22 13:11:56.629799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:37.228 [2024-07-22 13:11:56.629809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:1072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:37.228 [2024-07-22 13:11:56.629818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:37.228 [2024-07-22 13:11:56.629828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:1080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:37.228 [2024-07-22 13:11:56.629837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:37.228 [2024-07-22 13:11:56.629848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:1088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:37.228 [2024-07-22 13:11:56.629857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:37.228 [2024-07-22 13:11:56.629868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:1096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:37.228 [2024-07-22 13:11:56.629878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:37.228 [2024-07-22 13:11:56.629888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:1104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:37.228 [2024-07-22 13:11:56.629897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:37.228 [2024-07-22 13:11:56.629908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:1112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:37.228 [2024-07-22 13:11:56.629917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:37.228 [2024-07-22 13:11:56.629927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:1120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:37.228 [2024-07-22 13:11:56.629936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:37.228 [2024-07-22 13:11:56.629946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:1128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:37.228 [2024-07-22 13:11:56.629955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:49:37.228 [2024-07-22 13:11:56.629966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:1136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:37.228 [2024-07-22 13:11:56.629974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:37.228 [2024-07-22 13:11:56.629985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:1144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:37.228 [2024-07-22 13:11:56.629994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:37.228 [2024-07-22 13:11:56.630004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:1152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:37.228 [2024-07-22 13:11:56.630013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:37.228 [2024-07-22 13:11:56.630024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:1160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:37.228 [2024-07-22 13:11:56.630032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:37.228 [2024-07-22 13:11:56.630043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:1168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:37.228 [2024-07-22 13:11:56.630052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:37.228 [2024-07-22 13:11:56.630064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:1176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:37.229 [2024-07-22 13:11:56.630073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:37.229 [2024-07-22 13:11:56.630083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:1184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:37.229 [2024-07-22 13:11:56.630092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:37.229 [2024-07-22 13:11:56.630103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:37.229 [2024-07-22 13:11:56.630112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:37.229 [2024-07-22 13:11:56.630122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:37.229 [2024-07-22 13:11:56.630131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:37.229 [2024-07-22 13:11:56.630142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:37.229 [2024-07-22 13:11:56.630178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:37.229 [2024-07-22 13:11:56.630191] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:37.229 [2024-07-22 13:11:56.630200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:37.229 [2024-07-22 13:11:56.630211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:37.229 [2024-07-22 13:11:56.630231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:37.229 [2024-07-22 13:11:56.630242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:37.229 [2024-07-22 13:11:56.630251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:37.229 [2024-07-22 13:11:56.630263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:37.229 [2024-07-22 13:11:56.630272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:37.229 [2024-07-22 13:11:56.630282] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee8980 is same with the state(5) to be set 00:49:37.229 [2024-07-22 13:11:56.630294] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:49:37.229 [2024-07-22 13:11:56.630302] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:49:37.229 [2024-07-22 13:11:56.630310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:632 len:8 PRP1 0x0 PRP2 0x0 00:49:37.229 [2024-07-22 13:11:56.630319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:37.229 [2024-07-22 13:11:56.630378] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xee8980 was disconnected and freed. reset controller. 
00:49:37.229 [2024-07-22 13:11:56.630642] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:37.229 [2024-07-22 13:11:56.630742] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedd6b0 (9): Bad file descriptor 00:49:37.229 [2024-07-22 13:11:56.630870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:37.229 [2024-07-22 13:11:56.630920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:37.229 [2024-07-22 13:11:56.630937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedd6b0 with addr=10.0.0.2, port=4420 00:49:37.229 [2024-07-22 13:11:56.630947] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedd6b0 is same with the state(5) to be set 00:49:37.229 [2024-07-22 13:11:56.630966] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedd6b0 (9): Bad file descriptor 00:49:37.229 [2024-07-22 13:11:56.630983] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:37.229 [2024-07-22 13:11:56.630993] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:37.229 [2024-07-22 13:11:56.631003] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:37.229 [2024-07-22 13:11:56.631023] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:49:37.229 [2024-07-22 13:11:56.631033] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:37.487 13:11:56 -- host/timeout.sh@56 -- # sleep 2 00:49:39.412 [2024-07-22 13:11:58.631156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:39.412 [2024-07-22 13:11:58.631273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:39.412 [2024-07-22 13:11:58.631291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedd6b0 with addr=10.0.0.2, port=4420 00:49:39.412 [2024-07-22 13:11:58.631303] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedd6b0 is same with the state(5) to be set 00:49:39.412 [2024-07-22 13:11:58.631326] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedd6b0 (9): Bad file descriptor 00:49:39.412 [2024-07-22 13:11:58.631354] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:39.412 [2024-07-22 13:11:58.631365] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:39.412 [2024-07-22 13:11:58.631375] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:39.412 [2024-07-22 13:11:58.631398] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:49:39.412 [2024-07-22 13:11:58.631409] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:39.412 13:11:58 -- host/timeout.sh@57 -- # get_controller 00:49:39.412 13:11:58 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:49:39.412 13:11:58 -- host/timeout.sh@41 -- # jq -r '.[].name' 00:49:39.670 13:11:58 -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:49:39.670 13:11:58 -- host/timeout.sh@58 -- # get_bdev 00:49:39.670 13:11:58 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:49:39.670 13:11:58 -- host/timeout.sh@37 -- # jq -r '.[].name' 00:49:39.928 13:11:59 -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:49:39.928 13:11:59 -- host/timeout.sh@61 -- # sleep 5 00:49:41.302 [2024-07-22 13:12:00.631556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:41.302 [2024-07-22 13:12:00.631666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:41.302 [2024-07-22 13:12:00.631684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedd6b0 with addr=10.0.0.2, port=4420 00:49:41.302 [2024-07-22 13:12:00.631697] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedd6b0 is same with the state(5) to be set 00:49:41.302 [2024-07-22 13:12:00.631721] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedd6b0 (9): Bad file descriptor 00:49:41.303 [2024-07-22 13:12:00.631739] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:41.303 [2024-07-22 13:12:00.631748] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:41.303 [2024-07-22 13:12:00.631758] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:41.303 [2024-07-22 13:12:00.631784] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:49:41.303 [2024-07-22 13:12:00.631795] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:43.831 [2024-07-22 13:12:02.631827] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:43.831 [2024-07-22 13:12:02.631885] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:43.831 [2024-07-22 13:12:02.631913] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:43.831 [2024-07-22 13:12:02.631923] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:49:43.831 [2024-07-22 13:12:02.631949] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:49:44.397
00:49:44.397 Latency(us)
00:49:44.397 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:49:44.397 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:49:44.397 Verification LBA range: start 0x0 length 0x4000
00:49:44.397 NVMe0n1 : 8.12 2020.49 7.89 15.76 0.00 62777.75 2398.02 7015926.69
00:49:44.397 ===================================================================================================================
00:49:44.397 Total : 2020.49 7.89 15.76 0.00 62777.75 2398.02 7015926.69
00:49:44.397 0
00:49:44.965 13:12:04 -- host/timeout.sh@62 -- # get_controller
00:49:44.965 13:12:04 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:49:44.965 13:12:04 -- host/timeout.sh@41 -- # jq -r '.[].name'
00:49:44.965 13:12:04 -- host/timeout.sh@62 -- # [[ '' == '' ]]
00:49:44.965 13:12:04 -- host/timeout.sh@63 -- # get_bdev
00:49:44.965 13:12:04 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs
00:49:44.965 13:12:04 -- host/timeout.sh@37 -- # jq -r '.[].name'
00:49:45.255 13:12:04 -- host/timeout.sh@63 -- # [[ '' == '' ]]
00:49:45.255 13:12:04 -- host/timeout.sh@65 -- # wait 99652
00:49:45.255 13:12:04 -- host/timeout.sh@67 -- # killprocess 99604
00:49:45.255 13:12:04 -- common/autotest_common.sh@926 -- # '[' -z 99604 ']'
00:49:45.255 13:12:04 -- common/autotest_common.sh@930 -- # kill -0 99604
00:49:45.255 13:12:04 -- common/autotest_common.sh@931 -- # uname
00:49:45.255 13:12:04 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:49:45.255 13:12:04 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 99604
00:49:45.518 13:12:04 -- common/autotest_common.sh@932 -- # process_name=reactor_2
00:49:45.518 13:12:04 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']'
00:49:45.518 killing process with pid 99604
00:49:45.518 13:12:04 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 99604'
00:49:45.518 Received shutdown signal, test time was about 9.144773 seconds
00:49:45.518
00:49:45.518 Latency(us)
00:49:45.518 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:49:45.518 ===================================================================================================================
00:49:45.518 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:49:45.518 13:12:04 -- common/autotest_common.sh@945 -- # kill 99604
00:49:45.518 13:12:04 -- common/autotest_common.sh@950 -- # wait 99604
00:49:45.518 13:12:04 -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:49:45.776 [2024-07-22 13:12:05.091852] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:49:45.776 13:12:05 -- host/timeout.sh@74 -- # bdevperf_pid=99803
00:49:45.776 13:12:05 -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f
00:49:45.776 13:12:05 -- host/timeout.sh@76 -- # waitforlisten 99803 /var/tmp/bdevperf.sock
00:49:45.776 13:12:05 -- common/autotest_common.sh@819 -- # '[' -z 99803 ']'
00:49:45.776 13:12:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:49:45.776 13:12:05 -- common/autotest_common.sh@824 -- # local max_retries=100
00:49:45.776 Waiting for process to start up and listen on UNIX domain
socket /var/tmp/bdevperf.sock... 00:49:45.776 13:12:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:49:45.776 13:12:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:49:45.776 13:12:05 -- common/autotest_common.sh@10 -- # set +x 00:49:45.776 [2024-07-22 13:12:05.152484] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:49:45.776 [2024-07-22 13:12:05.152581] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99803 ] 00:49:46.035 [2024-07-22 13:12:05.289206] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:49:46.035 [2024-07-22 13:12:05.361352] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:49:46.969 13:12:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:49:46.969 13:12:06 -- common/autotest_common.sh@852 -- # return 0 00:49:46.969 13:12:06 -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:49:46.969 13:12:06 -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:49:47.535 NVMe0n1 00:49:47.535 13:12:06 -- host/timeout.sh@84 -- # rpc_pid=99851 00:49:47.535 13:12:06 -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:49:47.535 13:12:06 -- host/timeout.sh@86 -- # sleep 1 00:49:47.535 Running I/O for 10 seconds... 
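The setup traced above is what arms the reconnect behaviour exercised by the rest of the test: bdevperf is started with -z and waits on /var/tmp/bdevperf.sock, the NVMe bdev options are set (-r -1, as traced), and the controller is attached over TCP with a 5-second ctrlr-loss timeout, a 2-second fast-io-fail timeout and a 1-second reconnect delay before the verify workload is kicked off. A condensed sketch of those steps, reusing exactly the commands and socket captured in this run:

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"
  $RPC bdev_nvme_set_options -r -1
  # Attach the target; these three flags drive the timeout behaviour under test
  $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 \
      --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1
  # Start the 10-second verify run defined on the bdevperf command line and remember its pid
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
  rpc_pid=$!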
00:49:48.469 13:12:07 -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:49:48.730 [2024-07-22 13:12:07.933232] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1586940 is same with the state(5) to be set 00:49:48.730 [2024-07-22 13:12:07.933283] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1586940 is same with the state(5) to be set 00:49:48.730 [2024-07-22 13:12:07.933296] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1586940 is same with the state(5) to be set 00:49:48.730 [2024-07-22 13:12:07.933304] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1586940 is same with the state(5) to be set 00:49:48.730 [2024-07-22 13:12:07.933314] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1586940 is same with the state(5) to be set 00:49:48.730 [2024-07-22 13:12:07.933322] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1586940 is same with the state(5) to be set 00:49:48.730 [2024-07-22 13:12:07.933331] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1586940 is same with the state(5) to be set 00:49:48.730 [2024-07-22 13:12:07.933339] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1586940 is same with the state(5) to be set 00:49:48.730 [2024-07-22 13:12:07.933347] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1586940 is same with the state(5) to be set 00:49:48.730 [2024-07-22 13:12:07.933355] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1586940 is same with the state(5) to be set 00:49:48.730 [2024-07-22 13:12:07.933363] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1586940 is same with the state(5) to be set 00:49:48.730 [2024-07-22 13:12:07.933372] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1586940 is same with the state(5) to be set 00:49:48.730 [2024-07-22 13:12:07.933380] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1586940 is same with the state(5) to be set 00:49:48.730 [2024-07-22 13:12:07.933388] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1586940 is same with the state(5) to be set 00:49:48.730 [2024-07-22 13:12:07.933396] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1586940 is same with the state(5) to be set 00:49:48.730 [2024-07-22 13:12:07.933404] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1586940 is same with the state(5) to be set 00:49:48.730 [2024-07-22 13:12:07.933412] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1586940 is same with the state(5) to be set 00:49:48.730 [2024-07-22 13:12:07.933420] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1586940 is same with the state(5) to be set 00:49:48.730 [2024-07-22 13:12:07.933428] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1586940 is same with the state(5) to be set 00:49:48.730 [2024-07-22 13:12:07.933436] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1586940 is same with the state(5) to be set 00:49:48.730 [2024-07-22 13:12:07.933444] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x1586940 is same with the state(5) to be set 00:49:48.730 [2024-07-22 13:12:07.933452] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1586940 is same with the state(5) to be set 00:49:48.730 [2024-07-22 13:12:07.933460] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1586940 is same with the state(5) to be set 00:49:48.730 [2024-07-22 13:12:07.933468] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1586940 is same with the state(5) to be set 00:49:48.730 [2024-07-22 13:12:07.933476] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1586940 is same with the state(5) to be set 00:49:48.730 [2024-07-22 13:12:07.933483] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1586940 is same with the state(5) to be set 00:49:48.730 [2024-07-22 13:12:07.933491] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1586940 is same with the state(5) to be set 00:49:48.730 [2024-07-22 13:12:07.933499] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1586940 is same with the state(5) to be set 00:49:48.730 [2024-07-22 13:12:07.933507] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1586940 is same with the state(5) to be set 00:49:48.730 [2024-07-22 13:12:07.933536] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1586940 is same with the state(5) to be set 00:49:48.730 [2024-07-22 13:12:07.933560] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1586940 is same with the state(5) to be set 00:49:48.730 [2024-07-22 13:12:07.933569] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1586940 is same with the state(5) to be set 00:49:48.730 [2024-07-22 13:12:07.933577] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1586940 is same with the state(5) to be set 00:49:48.730 [2024-07-22 13:12:07.933585] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1586940 is same with the state(5) to be set 00:49:48.730 [2024-07-22 13:12:07.933602] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1586940 is same with the state(5) to be set 00:49:48.730 [2024-07-22 13:12:07.933610] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1586940 is same with the state(5) to be set 00:49:48.730 [2024-07-22 13:12:07.933618] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1586940 is same with the state(5) to be set 00:49:48.730 [2024-07-22 13:12:07.933627] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1586940 is same with the state(5) to be set 00:49:48.730 [2024-07-22 13:12:07.933635] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1586940 is same with the state(5) to be set 00:49:48.730 [2024-07-22 13:12:07.933643] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1586940 is same with the state(5) to be set 00:49:48.730 [2024-07-22 13:12:07.933651] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1586940 is same with the state(5) to be set 00:49:48.730 [2024-07-22 13:12:07.933660] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1586940 is same with the state(5) to be set 00:49:48.730 [2024-07-22 13:12:07.933668] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1586940 is same with the state(5) to be set 00:49:48.730 [2024-07-22 13:12:07.933676] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1586940 is same with the state(5) to be set 00:49:48.730 [2024-07-22 13:12:07.933684] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1586940 is same with the state(5) to be set 00:49:48.730 [2024-07-22 13:12:07.933692] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1586940 is same with the state(5) to be set 00:49:48.730 [2024-07-22 13:12:07.933700] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1586940 is same with the state(5) to be set 00:49:48.730 [2024-07-22 13:12:07.933708] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1586940 is same with the state(5) to be set 00:49:48.730 [2024-07-22 13:12:07.933716] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1586940 is same with the state(5) to be set 00:49:48.730 [2024-07-22 13:12:07.933724] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1586940 is same with the state(5) to be set 00:49:48.730 [2024-07-22 13:12:07.933732] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1586940 is same with the state(5) to be set 00:49:48.730 [2024-07-22 13:12:07.933740] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1586940 is same with the state(5) to be set 00:49:48.730 [2024-07-22 13:12:07.933748] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1586940 is same with the state(5) to be set 00:49:48.730 [2024-07-22 13:12:07.933756] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1586940 is same with the state(5) to be set 00:49:48.730 [2024-07-22 13:12:07.933764] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1586940 is same with the state(5) to be set 00:49:48.730 [2024-07-22 13:12:07.933772] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1586940 is same with the state(5) to be set 00:49:48.730 [2024-07-22 13:12:07.933781] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1586940 is same with the state(5) to be set 00:49:48.730 [2024-07-22 13:12:07.933789] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1586940 is same with the state(5) to be set 00:49:48.730 [2024-07-22 13:12:07.933801] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1586940 is same with the state(5) to be set 00:49:48.730 [2024-07-22 13:12:07.933809] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1586940 is same with the state(5) to be set 00:49:48.730 [2024-07-22 13:12:07.933817] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1586940 is same with the state(5) to be set 00:49:48.730 [2024-07-22 13:12:07.933825] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1586940 is same with the state(5) to be set 00:49:48.730 [2024-07-22 13:12:07.933833] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1586940 is same with the state(5) to be set 00:49:48.730 [2024-07-22 13:12:07.933841] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1586940 is same with the 
state(5) to be set 00:49:48.730 [2024-07-22 13:12:07.933849] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1586940 is same with the state(5) to be set 00:49:48.730 [2024-07-22 13:12:07.933857] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1586940 is same with the state(5) to be set 00:49:48.730 [2024-07-22 13:12:07.933865] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1586940 is same with the state(5) to be set 00:49:48.730 [2024-07-22 13:12:07.933873] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1586940 is same with the state(5) to be set 00:49:48.730 [2024-07-22 13:12:07.933880] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1586940 is same with the state(5) to be set 00:49:48.730 [2024-07-22 13:12:07.933888] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1586940 is same with the state(5) to be set 00:49:48.730 [2024-07-22 13:12:07.933896] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1586940 is same with the state(5) to be set 00:49:48.730 [2024-07-22 13:12:07.933904] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1586940 is same with the state(5) to be set 00:49:48.730 [2024-07-22 13:12:07.933913] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1586940 is same with the state(5) to be set 00:49:48.730 [2024-07-22 13:12:07.933921] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1586940 is same with the state(5) to be set 00:49:48.730 [2024-07-22 13:12:07.933929] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1586940 is same with the state(5) to be set 00:49:48.730 [2024-07-22 13:12:07.933937] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1586940 is same with the state(5) to be set 00:49:48.731 [2024-07-22 13:12:07.933946] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1586940 is same with the state(5) to be set 00:49:48.731 [2024-07-22 13:12:07.934370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:130464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:48.731 [2024-07-22 13:12:07.934405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:48.731 [2024-07-22 13:12:07.934430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:130472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:48.731 [2024-07-22 13:12:07.934441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:48.731 [2024-07-22 13:12:07.934453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:130496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:48.731 [2024-07-22 13:12:07.934462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:48.731 [2024-07-22 13:12:07.934473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:130512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:48.731 [2024-07-22 13:12:07.934483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:49:48.731 [2024-07-22 13:12:07.934494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:130536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:48.731 [2024-07-22 13:12:07.934503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:48.731 [2024-07-22 13:12:07.934514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:130544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:48.731 [2024-07-22 13:12:07.934524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:48.731 [2024-07-22 13:12:07.934535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:130552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:48.731 [2024-07-22 13:12:07.934544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:48.731 [2024-07-22 13:12:07.934555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:130576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:48.731 [2024-07-22 13:12:07.934564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:48.731 [2024-07-22 13:12:07.934575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:40 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:48.731 [2024-07-22 13:12:07.934584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:48.731 [2024-07-22 13:12:07.934595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:56 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:48.731 [2024-07-22 13:12:07.934604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:48.731 [2024-07-22 13:12:07.934615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:64 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:48.731 [2024-07-22 13:12:07.934636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:48.731 [2024-07-22 13:12:07.934648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:72 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:48.731 [2024-07-22 13:12:07.934657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:48.731 [2024-07-22 13:12:07.934668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:88 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:48.731 [2024-07-22 13:12:07.934677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:48.731 [2024-07-22 13:12:07.934688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:96 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:48.731 [2024-07-22 13:12:07.934697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:48.731 [2024-07-22 13:12:07.934708] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:48.731 [2024-07-22 13:12:07.934717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:48.731 [2024-07-22 13:12:07.934728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:48.731 [2024-07-22 13:12:07.934737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:48.731 [2024-07-22 13:12:07.934748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:48.731 [2024-07-22 13:12:07.934760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:48.731 [2024-07-22 13:12:07.934772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:48.731 [2024-07-22 13:12:07.934781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:48.731 [2024-07-22 13:12:07.934792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:48.731 [2024-07-22 13:12:07.934801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:48.731 [2024-07-22 13:12:07.934812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:48.731 [2024-07-22 13:12:07.934821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:48.731 [2024-07-22 13:12:07.934832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:48.731 [2024-07-22 13:12:07.934842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:48.731 [2024-07-22 13:12:07.934853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:48.731 [2024-07-22 13:12:07.934862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:48.731 [2024-07-22 13:12:07.934873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:48.731 [2024-07-22 13:12:07.934882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:48.731 [2024-07-22 13:12:07.934893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:48.731 [2024-07-22 13:12:07.934902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:48.731 [2024-07-22 13:12:07.934913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:109 nsid:1 lba:224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:48.731 [2024-07-22 13:12:07.934922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:48.731 [2024-07-22 13:12:07.934937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:130608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:48.731 [2024-07-22 13:12:07.934957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:48.731 [2024-07-22 13:12:07.934968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:130632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:48.731 [2024-07-22 13:12:07.934977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:48.731 [2024-07-22 13:12:07.934988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:130648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:48.731 [2024-07-22 13:12:07.934997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:48.731 [2024-07-22 13:12:07.935008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:130664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:48.731 [2024-07-22 13:12:07.935018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:48.731 [2024-07-22 13:12:07.935029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:130672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:48.731 [2024-07-22 13:12:07.935037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:48.731 [2024-07-22 13:12:07.935049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:130688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:48.731 [2024-07-22 13:12:07.935057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:48.731 [2024-07-22 13:12:07.935068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:130720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:48.731 [2024-07-22 13:12:07.935077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:48.731 [2024-07-22 13:12:07.935088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:130744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:48.731 [2024-07-22 13:12:07.935099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:48.731 [2024-07-22 13:12:07.935110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:130752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:48.731 [2024-07-22 13:12:07.935119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:48.731 [2024-07-22 13:12:07.935131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:130760 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:48.731 [2024-07-22 13:12:07.935151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:48.731 [2024-07-22 13:12:07.935164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:130768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:48.731 [2024-07-22 13:12:07.935173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:48.731 [2024-07-22 13:12:07.935184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:130776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:48.731 [2024-07-22 13:12:07.935194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:48.731 [2024-07-22 13:12:07.935205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:130800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:48.731 [2024-07-22 13:12:07.935214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:48.732 [2024-07-22 13:12:07.935225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:130808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:48.732 [2024-07-22 13:12:07.935234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:48.732 [2024-07-22 13:12:07.935245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:130816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:48.732 [2024-07-22 13:12:07.935254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:48.732 [2024-07-22 13:12:07.935265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:130824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:48.732 [2024-07-22 13:12:07.935274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:48.732 [2024-07-22 13:12:07.935285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:48.732 [2024-07-22 13:12:07.935295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:48.732 [2024-07-22 13:12:07.935306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:48.732 [2024-07-22 13:12:07.935315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:48.732 [2024-07-22 13:12:07.935326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:48.732 [2024-07-22 13:12:07.935335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:48.732 [2024-07-22 13:12:07.935346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:49:48.732 [2024-07-22 13:12:07.935355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:48.732 [2024-07-22 13:12:07.935366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:48.732 [2024-07-22 13:12:07.935376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:48.732 [2024-07-22 13:12:07.935387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:48.732 [2024-07-22 13:12:07.935396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:48.732 [2024-07-22 13:12:07.935407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:48.732 [2024-07-22 13:12:07.935416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:48.732 [2024-07-22 13:12:07.935428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:48.732 [2024-07-22 13:12:07.935437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:48.732 [2024-07-22 13:12:07.935449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:48.732 [2024-07-22 13:12:07.935458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:48.732 [2024-07-22 13:12:07.935469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:48.732 [2024-07-22 13:12:07.935479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:48.732 [2024-07-22 13:12:07.935491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:48.732 [2024-07-22 13:12:07.935500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:48.732 [2024-07-22 13:12:07.935511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:48.732 [2024-07-22 13:12:07.935520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:48.732 [2024-07-22 13:12:07.935531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:48.732 [2024-07-22 13:12:07.935541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:48.732 [2024-07-22 13:12:07.935551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:48.732 [2024-07-22 13:12:07.935561] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:48.732 [2024-07-22 13:12:07.935571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:48.732 [2024-07-22 13:12:07.935580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:48.732 [2024-07-22 13:12:07.935591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:48.732 [2024-07-22 13:12:07.935600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:48.732 [2024-07-22 13:12:07.935611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:48.732 [2024-07-22 13:12:07.935621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:48.732 [2024-07-22 13:12:07.935632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:48.732 [2024-07-22 13:12:07.935641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:48.732 [2024-07-22 13:12:07.935652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:48.732 [2024-07-22 13:12:07.935661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:48.732 [2024-07-22 13:12:07.935672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:48.732 [2024-07-22 13:12:07.935681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:48.732 [2024-07-22 13:12:07.935692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:48.732 [2024-07-22 13:12:07.935701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:48.732 [2024-07-22 13:12:07.935712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:48.732 [2024-07-22 13:12:07.935722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:48.732 [2024-07-22 13:12:07.935733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:48.732 [2024-07-22 13:12:07.935742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:48.732 [2024-07-22 13:12:07.935753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:48.732 [2024-07-22 13:12:07.935764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:48.732 [2024-07-22 13:12:07.935776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:48.732 [2024-07-22 13:12:07.935785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:48.732 [2024-07-22 13:12:07.935796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:48.732 [2024-07-22 13:12:07.935813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:48.732 [2024-07-22 13:12:07.935824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:48.732 [2024-07-22 13:12:07.935834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:48.732 [2024-07-22 13:12:07.935844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:48.732 [2024-07-22 13:12:07.935854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:48.732 [2024-07-22 13:12:07.935867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:48.732 [2024-07-22 13:12:07.935877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:48.732 [2024-07-22 13:12:07.935888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:48.732 [2024-07-22 13:12:07.935897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:48.732 [2024-07-22 13:12:07.935908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:48.732 [2024-07-22 13:12:07.935918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:48.732 [2024-07-22 13:12:07.935929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:48.732 [2024-07-22 13:12:07.935938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:48.732 [2024-07-22 13:12:07.935949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:130840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:48.732 [2024-07-22 13:12:07.935958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:48.732 [2024-07-22 13:12:07.935969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:130864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:48.732 [2024-07-22 13:12:07.935978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:48.732 
[2024-07-22 13:12:07.935989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:130888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:48.732 [2024-07-22 13:12:07.935998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:48.733 [2024-07-22 13:12:07.936009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:130904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:48.733 [2024-07-22 13:12:07.936018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:48.733 [2024-07-22 13:12:07.936029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:130912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:48.733 [2024-07-22 13:12:07.936038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:48.733 [2024-07-22 13:12:07.936049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:130936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:48.733 [2024-07-22 13:12:07.936059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:48.733 [2024-07-22 13:12:07.936069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:130944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:48.733 [2024-07-22 13:12:07.936079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:48.733 [2024-07-22 13:12:07.936092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:130952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:48.733 [2024-07-22 13:12:07.936102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:48.733 [2024-07-22 13:12:07.936114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:130968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:48.733 [2024-07-22 13:12:07.936123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:48.733 [2024-07-22 13:12:07.936144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:130976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:48.733 [2024-07-22 13:12:07.936155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:48.733 [2024-07-22 13:12:07.936166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:130984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:48.733 [2024-07-22 13:12:07.936176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:48.733 [2024-07-22 13:12:07.936187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:131016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:48.733 [2024-07-22 13:12:07.936196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:48.733 [2024-07-22 13:12:07.936207] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:131032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:48.733 [2024-07-22 13:12:07.936216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:48.733 [2024-07-22 13:12:07.936228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:131040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:48.733 [2024-07-22 13:12:07.936252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:48.733 [2024-07-22 13:12:07.936264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:131048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:48.733 [2024-07-22 13:12:07.936273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:48.733 [2024-07-22 13:12:07.936285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:0 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:48.733 [2024-07-22 13:12:07.936302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:48.733 [2024-07-22 13:12:07.936313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:48.733 [2024-07-22 13:12:07.936323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:48.733 [2024-07-22 13:12:07.936334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:24 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:48.733 [2024-07-22 13:12:07.936343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:48.733 [2024-07-22 13:12:07.936355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:48.733 [2024-07-22 13:12:07.936364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:48.733 [2024-07-22 13:12:07.936375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:48 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:48.733 [2024-07-22 13:12:07.936384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:48.733 [2024-07-22 13:12:07.936395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:80 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:48.733 [2024-07-22 13:12:07.936404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:48.733 [2024-07-22 13:12:07.936415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:48.733 [2024-07-22 13:12:07.936424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:48.733 [2024-07-22 13:12:07.936435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:45 nsid:1 lba:144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:48.733 [2024-07-22 13:12:07.936444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:48.733 [2024-07-22 13:12:07.936455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:48.733 [2024-07-22 13:12:07.936471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:48.733 [2024-07-22 13:12:07.936483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:48.733 [2024-07-22 13:12:07.936492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:48.733 [2024-07-22 13:12:07.936504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:48.733 [2024-07-22 13:12:07.936513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:48.733 [2024-07-22 13:12:07.936524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:48.733 [2024-07-22 13:12:07.936533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:48.733 [2024-07-22 13:12:07.936544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:48.733 [2024-07-22 13:12:07.936554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:48.733 [2024-07-22 13:12:07.936564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:48.733 [2024-07-22 13:12:07.936573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:48.733 [2024-07-22 13:12:07.936584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:48.733 [2024-07-22 13:12:07.936593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:48.733 [2024-07-22 13:12:07.936604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:48.733 [2024-07-22 13:12:07.936613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:48.733 [2024-07-22 13:12:07.936624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:48.733 [2024-07-22 13:12:07.936633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:48.733 [2024-07-22 13:12:07.936644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:600 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:49:48.733 [2024-07-22 13:12:07.936653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:48.733 [2024-07-22 13:12:07.936663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:48.733 [2024-07-22 13:12:07.936673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:48.733 [2024-07-22 13:12:07.936683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:48.733 [2024-07-22 13:12:07.936692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:48.733 [2024-07-22 13:12:07.936703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:48.733 [2024-07-22 13:12:07.936712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:48.733 [2024-07-22 13:12:07.936723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:48.733 [2024-07-22 13:12:07.936732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:48.733 [2024-07-22 13:12:07.936743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:48.733 [2024-07-22 13:12:07.936752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:48.733 [2024-07-22 13:12:07.936764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:48.733 [2024-07-22 13:12:07.936773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:48.733 [2024-07-22 13:12:07.936785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:48.733 [2024-07-22 13:12:07.936799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:48.733 [2024-07-22 13:12:07.936810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:48.733 [2024-07-22 13:12:07.936819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:48.733 [2024-07-22 13:12:07.936830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:48.733 [2024-07-22 13:12:07.936839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:48.734 [2024-07-22 13:12:07.936856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:48.734 [2024-07-22 13:12:07.936865] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:48.734 [2024-07-22 13:12:07.936876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:48.734 [2024-07-22 13:12:07.936885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:48.734 [2024-07-22 13:12:07.936896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:48.734 [2024-07-22 13:12:07.936906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:48.734 [2024-07-22 13:12:07.936917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:48.734 [2024-07-22 13:12:07.936926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:48.734 [2024-07-22 13:12:07.936938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:48.734 [2024-07-22 13:12:07.936947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:48.734 [2024-07-22 13:12:07.936959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:48.734 [2024-07-22 13:12:07.936968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:48.734 [2024-07-22 13:12:07.936979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:48.734 [2024-07-22 13:12:07.936988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:48.734 [2024-07-22 13:12:07.936999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:48.734 [2024-07-22 13:12:07.937013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:48.734 [2024-07-22 13:12:07.937024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:48.734 [2024-07-22 13:12:07.937033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:48.734 [2024-07-22 13:12:07.937044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:48.734 [2024-07-22 13:12:07.937053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:48.734 [2024-07-22 13:12:07.937064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:48.734 [2024-07-22 13:12:07.937073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:48.734 [2024-07-22 13:12:07.937084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:48.734 [2024-07-22 13:12:07.937093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:48.734 [2024-07-22 13:12:07.937103] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de9860 is same with the state(5) to be set 00:49:48.734 [2024-07-22 13:12:07.937116] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:49:48.734 [2024-07-22 13:12:07.937123] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:49:48.734 [2024-07-22 13:12:07.937151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:304 len:8 PRP1 0x0 PRP2 0x0 00:49:48.734 [2024-07-22 13:12:07.937162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:48.734 [2024-07-22 13:12:07.937226] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1de9860 was disconnected and freed. reset controller. 00:49:48.734 [2024-07-22 13:12:07.937458] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:48.734 [2024-07-22 13:12:07.937548] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dde4f0 (9): Bad file descriptor 00:49:48.734 [2024-07-22 13:12:07.937658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:48.734 [2024-07-22 13:12:07.937706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:48.734 [2024-07-22 13:12:07.937722] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dde4f0 with addr=10.0.0.2, port=4420 00:49:48.734 [2024-07-22 13:12:07.937732] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dde4f0 is same with the state(5) to be set 00:49:48.734 [2024-07-22 13:12:07.937750] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dde4f0 (9): Bad file descriptor 00:49:48.734 [2024-07-22 13:12:07.937766] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:48.734 [2024-07-22 13:12:07.937775] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:48.734 [2024-07-22 13:12:07.937786] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:48.734 [2024-07-22 13:12:07.937806] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:49:48.734 [2024-07-22 13:12:07.937817] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:48.734 13:12:07 -- host/timeout.sh@90 -- # sleep 1 00:49:49.669 [2024-07-22 13:12:08.937909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:49.669 [2024-07-22 13:12:08.938000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:49.669 [2024-07-22 13:12:08.938018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dde4f0 with addr=10.0.0.2, port=4420 00:49:49.669 [2024-07-22 13:12:08.938030] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dde4f0 is same with the state(5) to be set 00:49:49.669 [2024-07-22 13:12:08.938051] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dde4f0 (9): Bad file descriptor 00:49:49.669 [2024-07-22 13:12:08.938068] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:49.669 [2024-07-22 13:12:08.938076] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:49.669 [2024-07-22 13:12:08.938085] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:49.669 [2024-07-22 13:12:08.938108] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:49:49.669 [2024-07-22 13:12:08.938119] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:49.669 13:12:08 -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:49:49.928 [2024-07-22 13:12:09.159299] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:49:49.928 13:12:09 -- host/timeout.sh@92 -- # wait 99851 00:49:50.863 [2024-07-22 13:12:09.954017] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:49:57.451 00:49:57.452 Latency(us) 00:49:57.452 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:49:57.452 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:49:57.452 Verification LBA range: start 0x0 length 0x4000 00:49:57.452 NVMe0n1 : 10.01 10040.87 39.22 0.00 0.00 12725.87 1154.33 3019898.88 00:49:57.452 =================================================================================================================== 00:49:57.452 Total : 10040.87 39.22 0.00 0.00 12725.87 1154.33 3019898.88 00:49:57.452 0 00:49:57.452 13:12:16 -- host/timeout.sh@97 -- # rpc_pid=99973 00:49:57.452 13:12:16 -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:49:57.452 13:12:16 -- host/timeout.sh@98 -- # sleep 1 00:49:57.710 Running I/O for 10 seconds... 
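The trace above is one pass of the listener-toggle cycle that host/timeout.sh drives: the TCP listener for nqn.2016-06.io.spdk:cnode1 is dropped, queued I/O on the qpair is aborted with ABORTED - SQ DELETION, reconnect attempts fail with errno = 111 (connection refused), and once the listener is re-added the controller reset succeeds and bdevperf prints its verify-run totals before the next pass starts. A minimal sketch of that cycle, using only the rpc.py and bdevperf.py invocations that appear verbatim in this log (paths, NQN, address and port are taken from the trace; the ordering is a simplification of the script's actual flow):

# drop the TCP listener so in-flight I/O is aborted and host reconnects start failing (errno = 111)
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
sleep 1
# restore the listener; the host's next reconnect/reset attempt should then complete successfully
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# kick off the next I/O pass against the already-running bdevperf instance
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests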
00:49:58.648 13:12:17 -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:49:58.648 [2024-07-22 13:12:18.049340] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e35f0 is same with the state(5) to be set 00:49:58.648 [2024-07-22 13:12:18.049392] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e35f0 is same with the state(5) to be set 00:49:58.648 [2024-07-22 13:12:18.049405] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e35f0 is same with the state(5) to be set 00:49:58.648 [2024-07-22 13:12:18.049414] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e35f0 is same with the state(5) to be set 00:49:58.648 [2024-07-22 13:12:18.049424] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e35f0 is same with the state(5) to be set 00:49:58.648 [2024-07-22 13:12:18.049433] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e35f0 is same with the state(5) to be set 00:49:58.648 [2024-07-22 13:12:18.049441] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e35f0 is same with the state(5) to be set 00:49:58.648 [2024-07-22 13:12:18.049449] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e35f0 is same with the state(5) to be set 00:49:58.648 [2024-07-22 13:12:18.049457] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e35f0 is same with the state(5) to be set 00:49:58.648 [2024-07-22 13:12:18.049466] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e35f0 is same with the state(5) to be set 00:49:58.648 [2024-07-22 13:12:18.049474] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e35f0 is same with the state(5) to be set 00:49:58.648 [2024-07-22 13:12:18.049483] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e35f0 is same with the state(5) to be set 00:49:58.648 [2024-07-22 13:12:18.049491] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e35f0 is same with the state(5) to be set 00:49:58.648 [2024-07-22 13:12:18.049499] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e35f0 is same with the state(5) to be set 00:49:58.648 [2024-07-22 13:12:18.049507] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e35f0 is same with the state(5) to be set 00:49:58.648 [2024-07-22 13:12:18.049530] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e35f0 is same with the state(5) to be set 00:49:58.648 [2024-07-22 13:12:18.049552] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e35f0 is same with the state(5) to be set 00:49:58.648 [2024-07-22 13:12:18.049560] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e35f0 is same with the state(5) to be set 00:49:58.648 [2024-07-22 13:12:18.049582] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e35f0 is same with the state(5) to be set 00:49:58.648 [2024-07-22 13:12:18.049590] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e35f0 is same with the state(5) to be set 00:49:58.648 [2024-07-22 13:12:18.049597] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x13e35f0 is same with the state(5) to be set 00:49:58.648 [2024-07-22 13:12:18.049604] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e35f0 is same with the state(5) to be set 00:49:58.648 [2024-07-22 13:12:18.049611] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e35f0 is same with the state(5) to be set 00:49:58.648 [2024-07-22 13:12:18.049618] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e35f0 is same with the state(5) to be set 00:49:58.648 [2024-07-22 13:12:18.049641] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e35f0 is same with the state(5) to be set 00:49:58.648 [2024-07-22 13:12:18.049664] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e35f0 is same with the state(5) to be set 00:49:58.649 [2024-07-22 13:12:18.049687] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e35f0 is same with the state(5) to be set 00:49:58.649 [2024-07-22 13:12:18.049694] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e35f0 is same with the state(5) to be set 00:49:58.649 [2024-07-22 13:12:18.049702] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e35f0 is same with the state(5) to be set 00:49:58.649 [2024-07-22 13:12:18.049710] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e35f0 is same with the state(5) to be set 00:49:58.649 [2024-07-22 13:12:18.049719] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e35f0 is same with the state(5) to be set 00:49:58.649 [2024-07-22 13:12:18.049727] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e35f0 is same with the state(5) to be set 00:49:58.649 [2024-07-22 13:12:18.049736] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e35f0 is same with the state(5) to be set 00:49:58.649 [2024-07-22 13:12:18.049744] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e35f0 is same with the state(5) to be set 00:49:58.649 [2024-07-22 13:12:18.049753] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e35f0 is same with the state(5) to be set 00:49:58.649 [2024-07-22 13:12:18.049761] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e35f0 is same with the state(5) to be set 00:49:58.649 [2024-07-22 13:12:18.049778] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e35f0 is same with the state(5) to be set 00:49:58.649 [2024-07-22 13:12:18.049786] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e35f0 is same with the state(5) to be set 00:49:58.649 [2024-07-22 13:12:18.049795] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e35f0 is same with the state(5) to be set 00:49:58.649 [2024-07-22 13:12:18.049805] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e35f0 is same with the state(5) to be set 00:49:58.649 [2024-07-22 13:12:18.049813] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e35f0 is same with the state(5) to be set 00:49:58.649 [2024-07-22 13:12:18.049821] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e35f0 is same with the state(5) to be set 00:49:58.649 [2024-07-22 13:12:18.049829] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e35f0 is same with the state(5) to be set 00:49:58.649 [2024-07-22 13:12:18.049837] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e35f0 is same with the state(5) to be set 00:49:58.649 [2024-07-22 13:12:18.049844] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e35f0 is same with the state(5) to be set 00:49:58.649 [2024-07-22 13:12:18.049852] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e35f0 is same with the state(5) to be set 00:49:58.649 [2024-07-22 13:12:18.049860] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e35f0 is same with the state(5) to be set 00:49:58.649 [2024-07-22 13:12:18.049868] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e35f0 is same with the state(5) to be set 00:49:58.649 [2024-07-22 13:12:18.049876] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e35f0 is same with the state(5) to be set 00:49:58.649 [2024-07-22 13:12:18.049884] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e35f0 is same with the state(5) to be set 00:49:58.649 [2024-07-22 13:12:18.049892] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e35f0 is same with the state(5) to be set 00:49:58.649 [2024-07-22 13:12:18.049899] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e35f0 is same with the state(5) to be set 00:49:58.649 [2024-07-22 13:12:18.049907] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e35f0 is same with the state(5) to be set 00:49:58.649 [2024-07-22 13:12:18.049930] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e35f0 is same with the state(5) to be set 00:49:58.649 [2024-07-22 13:12:18.049938] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e35f0 is same with the state(5) to be set 00:49:58.649 [2024-07-22 13:12:18.049946] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e35f0 is same with the state(5) to be set 00:49:58.649 [2024-07-22 13:12:18.049954] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e35f0 is same with the state(5) to be set 00:49:58.649 [2024-07-22 13:12:18.049961] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e35f0 is same with the state(5) to be set 00:49:58.649 [2024-07-22 13:12:18.049970] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e35f0 is same with the state(5) to be set 00:49:58.649 [2024-07-22 13:12:18.049978] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e35f0 is same with the state(5) to be set 00:49:58.649 [2024-07-22 13:12:18.049986] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e35f0 is same with the state(5) to be set 00:49:58.649 [2024-07-22 13:12:18.049994] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e35f0 is same with the state(5) to be set 00:49:58.649 [2024-07-22 13:12:18.050001] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e35f0 is same with the state(5) to be set 00:49:58.649 [2024-07-22 13:12:18.050010] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e35f0 is same with the 
state(5) to be set 00:49:58.649 [2024-07-22 13:12:18.050018] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e35f0 is same with the state(5) to be set 00:49:58.649 [2024-07-22 13:12:18.050026] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e35f0 is same with the state(5) to be set 00:49:58.649 [2024-07-22 13:12:18.050034] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e35f0 is same with the state(5) to be set 00:49:58.649 [2024-07-22 13:12:18.050043] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e35f0 is same with the state(5) to be set 00:49:58.649 [2024-07-22 13:12:18.050051] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e35f0 is same with the state(5) to be set 00:49:58.649 [2024-07-22 13:12:18.050059] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e35f0 is same with the state(5) to be set 00:49:58.649 [2024-07-22 13:12:18.050067] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e35f0 is same with the state(5) to be set 00:49:58.649 [2024-07-22 13:12:18.050075] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e35f0 is same with the state(5) to be set 00:49:58.649 [2024-07-22 13:12:18.050084] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e35f0 is same with the state(5) to be set 00:49:58.649 [2024-07-22 13:12:18.050092] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e35f0 is same with the state(5) to be set 00:49:58.649 [2024-07-22 13:12:18.050100] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e35f0 is same with the state(5) to be set 00:49:58.649 [2024-07-22 13:12:18.050108] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e35f0 is same with the state(5) to be set 00:49:58.649 [2024-07-22 13:12:18.050116] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e35f0 is same with the state(5) to be set 00:49:58.649 [2024-07-22 13:12:18.050124] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e35f0 is same with the state(5) to be set 00:49:58.649 [2024-07-22 13:12:18.050132] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e35f0 is same with the state(5) to be set 00:49:58.649 [2024-07-22 13:12:18.050546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:122600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:58.649 [2024-07-22 13:12:18.050584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:58.649 [2024-07-22 13:12:18.050607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:122616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:58.649 [2024-07-22 13:12:18.050627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:58.649 [2024-07-22 13:12:18.050664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:122624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:58.649 [2024-07-22 13:12:18.050674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:58.649 
[2024-07-22 13:12:18.050685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:122632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:58.649 [2024-07-22 13:12:18.050694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:58.649 [2024-07-22 13:12:18.050706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:122640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:58.649 [2024-07-22 13:12:18.050715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:58.649 [2024-07-22 13:12:18.050726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:122648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:58.649 [2024-07-22 13:12:18.050735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:58.649 [2024-07-22 13:12:18.050746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:122696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:58.649 [2024-07-22 13:12:18.050755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:58.649 [2024-07-22 13:12:18.050766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:122704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:58.649 [2024-07-22 13:12:18.050776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:58.649 [2024-07-22 13:12:18.050787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:121944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:58.649 [2024-07-22 13:12:18.050796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:58.649 [2024-07-22 13:12:18.050807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:121952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:58.649 [2024-07-22 13:12:18.050817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:58.649 [2024-07-22 13:12:18.050828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:121960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:58.649 [2024-07-22 13:12:18.050837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:58.649 [2024-07-22 13:12:18.050848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:121968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:58.649 [2024-07-22 13:12:18.050857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:58.649 [2024-07-22 13:12:18.050868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:121976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:58.649 [2024-07-22 13:12:18.050877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:58.649 [2024-07-22 13:12:18.050888] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:121984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:58.649 [2024-07-22 13:12:18.050897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:58.650 [2024-07-22 13:12:18.050908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:121992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:58.650 [2024-07-22 13:12:18.050917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:58.650 [2024-07-22 13:12:18.050928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:122008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:58.650 [2024-07-22 13:12:18.050937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:58.650 [2024-07-22 13:12:18.050948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:122016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:58.650 [2024-07-22 13:12:18.050959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:58.650 [2024-07-22 13:12:18.050971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:122024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:58.650 [2024-07-22 13:12:18.050986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:58.650 [2024-07-22 13:12:18.050997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:122040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:58.650 [2024-07-22 13:12:18.051009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:58.650 [2024-07-22 13:12:18.051020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:122048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:58.650 [2024-07-22 13:12:18.051029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:58.650 [2024-07-22 13:12:18.051040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:122056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:58.650 [2024-07-22 13:12:18.051049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:58.650 [2024-07-22 13:12:18.051060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:122072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:58.650 [2024-07-22 13:12:18.051069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:58.650 [2024-07-22 13:12:18.051086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:122088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:58.650 [2024-07-22 13:12:18.051095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:58.650 [2024-07-22 13:12:18.051111] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:122096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:58.650 [2024-07-22 13:12:18.051120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:58.650 [2024-07-22 13:12:18.051131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:122120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:58.650 [2024-07-22 13:12:18.051146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:58.650 [2024-07-22 13:12:18.051170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:122136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:58.650 [2024-07-22 13:12:18.051180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:58.650 [2024-07-22 13:12:18.051191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:122152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:58.650 [2024-07-22 13:12:18.051201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:58.650 [2024-07-22 13:12:18.051212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:122184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:58.650 [2024-07-22 13:12:18.051221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:58.650 [2024-07-22 13:12:18.051232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:122208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:58.650 [2024-07-22 13:12:18.051241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:58.650 [2024-07-22 13:12:18.051252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:122240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:58.650 [2024-07-22 13:12:18.051261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:58.650 [2024-07-22 13:12:18.051274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:122248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:58.650 [2024-07-22 13:12:18.051283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:58.650 [2024-07-22 13:12:18.051294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:122256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:58.650 [2024-07-22 13:12:18.051303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:58.650 [2024-07-22 13:12:18.051314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:122720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:58.650 [2024-07-22 13:12:18.051324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:58.650 [2024-07-22 13:12:18.051336] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:88 nsid:1 lba:122728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:58.650 [2024-07-22 13:12:18.051345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:58.650 [2024-07-22 13:12:18.051356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:122736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:58.650 [2024-07-22 13:12:18.051365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:58.650 [2024-07-22 13:12:18.051377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:122744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:58.650 [2024-07-22 13:12:18.051385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:58.650 [2024-07-22 13:12:18.051397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:122752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:58.650 [2024-07-22 13:12:18.051406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:58.650 [2024-07-22 13:12:18.051417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:122760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:58.650 [2024-07-22 13:12:18.051426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:58.650 [2024-07-22 13:12:18.051437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:122768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:58.650 [2024-07-22 13:12:18.051446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:58.650 [2024-07-22 13:12:18.051457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:122776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:58.650 [2024-07-22 13:12:18.051471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:58.650 [2024-07-22 13:12:18.051482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:122784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:58.650 [2024-07-22 13:12:18.051500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:58.650 [2024-07-22 13:12:18.051512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:122792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:58.650 [2024-07-22 13:12:18.051521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:58.650 [2024-07-22 13:12:18.051532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:122800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:58.650 [2024-07-22 13:12:18.051541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:58.650 [2024-07-22 13:12:18.051552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 
lba:122808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:58.650 [2024-07-22 13:12:18.051561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:58.650 [2024-07-22 13:12:18.051572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:122816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:58.650 [2024-07-22 13:12:18.051581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:58.650 [2024-07-22 13:12:18.051593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:122296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:58.650 [2024-07-22 13:12:18.051602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:58.650 [2024-07-22 13:12:18.051613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:122320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:58.650 [2024-07-22 13:12:18.051622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:58.650 [2024-07-22 13:12:18.051633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:122328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:58.650 [2024-07-22 13:12:18.051642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:58.650 [2024-07-22 13:12:18.051653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:122336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:58.650 [2024-07-22 13:12:18.051663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:58.650 [2024-07-22 13:12:18.051674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:122344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:58.650 [2024-07-22 13:12:18.051683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:58.650 [2024-07-22 13:12:18.051694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:122392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:58.650 [2024-07-22 13:12:18.051703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:58.650 [2024-07-22 13:12:18.051714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:122408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:58.650 [2024-07-22 13:12:18.051723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:58.650 [2024-07-22 13:12:18.051734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:122472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:58.650 [2024-07-22 13:12:18.051743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:58.650 [2024-07-22 13:12:18.051754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:122824 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:49:58.651 [2024-07-22 13:12:18.051763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:58.651 [2024-07-22 13:12:18.051774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:58.651 [2024-07-22 13:12:18.051783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:58.651 [2024-07-22 13:12:18.051794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:122840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:58.651 [2024-07-22 13:12:18.051803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:58.651 [2024-07-22 13:12:18.051813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:122848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:58.651 [2024-07-22 13:12:18.051827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:58.651 [2024-07-22 13:12:18.051839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:122856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:58.651 [2024-07-22 13:12:18.051848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:58.651 [2024-07-22 13:12:18.051860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:122864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:58.651 [2024-07-22 13:12:18.051869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:58.651 [2024-07-22 13:12:18.051880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:122872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:58.651 [2024-07-22 13:12:18.051889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:58.651 [2024-07-22 13:12:18.051900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:122880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:58.651 [2024-07-22 13:12:18.051909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:58.651 [2024-07-22 13:12:18.051920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:122888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:58.651 [2024-07-22 13:12:18.051930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:58.651 [2024-07-22 13:12:18.051941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:122896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:58.651 [2024-07-22 13:12:18.051951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:58.651 [2024-07-22 13:12:18.051962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:122904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:58.651 
[2024-07-22 13:12:18.051971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:58.651 [2024-07-22 13:12:18.051982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:122912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:58.651 [2024-07-22 13:12:18.051991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:58.651 [2024-07-22 13:12:18.052002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:122920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:58.651 [2024-07-22 13:12:18.052011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:58.651 [2024-07-22 13:12:18.052021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:122928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:58.651 [2024-07-22 13:12:18.052030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:58.651 [2024-07-22 13:12:18.052041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:122936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:58.651 [2024-07-22 13:12:18.052050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:58.651 [2024-07-22 13:12:18.052062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:122944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:58.651 [2024-07-22 13:12:18.052071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:58.651 [2024-07-22 13:12:18.052082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:122952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:58.651 [2024-07-22 13:12:18.052091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:58.651 [2024-07-22 13:12:18.052102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:122960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:58.651 [2024-07-22 13:12:18.052111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:58.651 [2024-07-22 13:12:18.052123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:122968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:58.651 [2024-07-22 13:12:18.052132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:58.651 [2024-07-22 13:12:18.052153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:122976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:58.651 [2024-07-22 13:12:18.052163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:58.651 [2024-07-22 13:12:18.052174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:122984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:58.651 [2024-07-22 13:12:18.052184] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:58.651 [2024-07-22 13:12:18.052195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:122992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:58.651 [2024-07-22 13:12:18.052205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:58.651 [2024-07-22 13:12:18.052216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:122504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:58.651 [2024-07-22 13:12:18.052225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:58.651 [2024-07-22 13:12:18.052236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:122512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:58.651 [2024-07-22 13:12:18.052245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:58.651 [2024-07-22 13:12:18.052256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:122520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:58.651 [2024-07-22 13:12:18.052265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:58.651 [2024-07-22 13:12:18.052277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:122528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:58.651 [2024-07-22 13:12:18.052286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:58.651 [2024-07-22 13:12:18.052297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:122552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:58.651 [2024-07-22 13:12:18.052307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:58.651 [2024-07-22 13:12:18.052318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:122560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:58.651 [2024-07-22 13:12:18.052329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:58.651 [2024-07-22 13:12:18.052340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:122568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:58.651 [2024-07-22 13:12:18.052349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:58.651 [2024-07-22 13:12:18.052360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:122576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:58.651 [2024-07-22 13:12:18.052369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:58.651 [2024-07-22 13:12:18.052380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:123000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:58.651 [2024-07-22 13:12:18.052389] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:58.651 [2024-07-22 13:12:18.052400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:123008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:58.651 [2024-07-22 13:12:18.052409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:58.651 [2024-07-22 13:12:18.052420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:123016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:58.651 [2024-07-22 13:12:18.052436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:58.651 [2024-07-22 13:12:18.052447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:123024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:58.651 [2024-07-22 13:12:18.052456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:58.651 [2024-07-22 13:12:18.052467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:123032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:58.651 [2024-07-22 13:12:18.052476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:58.651 [2024-07-22 13:12:18.052486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:123040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:58.651 [2024-07-22 13:12:18.052495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:58.651 [2024-07-22 13:12:18.052506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:123048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:58.651 [2024-07-22 13:12:18.052515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:58.651 [2024-07-22 13:12:18.052526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:123056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:58.651 [2024-07-22 13:12:18.052535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:58.651 [2024-07-22 13:12:18.052546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:123064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:58.651 [2024-07-22 13:12:18.052555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:58.651 [2024-07-22 13:12:18.052566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:123072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:58.651 [2024-07-22 13:12:18.052575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:58.652 [2024-07-22 13:12:18.052586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:123080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:58.652 [2024-07-22 13:12:18.052596] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:58.652 [2024-07-22 13:12:18.052611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:123088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:58.652 [2024-07-22 13:12:18.052628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:58.652 [2024-07-22 13:12:18.052639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:123096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:58.652 [2024-07-22 13:12:18.052648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:58.652 [2024-07-22 13:12:18.052659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:123104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:58.652 [2024-07-22 13:12:18.052668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:58.652 [2024-07-22 13:12:18.052679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:123112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:58.652 [2024-07-22 13:12:18.052688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:58.652 [2024-07-22 13:12:18.052699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:123120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:58.652 [2024-07-22 13:12:18.052707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:58.652 [2024-07-22 13:12:18.052724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:123128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:58.652 [2024-07-22 13:12:18.052733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:58.652 [2024-07-22 13:12:18.052744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:123136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:58.652 [2024-07-22 13:12:18.052753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:58.652 [2024-07-22 13:12:18.052765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:123144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:58.652 [2024-07-22 13:12:18.052774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:58.652 [2024-07-22 13:12:18.052786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:123152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:58.652 [2024-07-22 13:12:18.052794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:58.652 [2024-07-22 13:12:18.052805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:123160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:58.652 [2024-07-22 13:12:18.052814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:58.652 [2024-07-22 13:12:18.052825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:123168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:58.652 [2024-07-22 13:12:18.052835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:58.652 [2024-07-22 13:12:18.052846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:123176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:58.652 [2024-07-22 13:12:18.052857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:58.652 [2024-07-22 13:12:18.052868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:123184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:58.652 [2024-07-22 13:12:18.052881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:58.652 [2024-07-22 13:12:18.052892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:123192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:58.652 [2024-07-22 13:12:18.052901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:58.652 [2024-07-22 13:12:18.052912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:123200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:58.652 [2024-07-22 13:12:18.052929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:58.652 [2024-07-22 13:12:18.052940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:123208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:58.652 [2024-07-22 13:12:18.052949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:58.652 [2024-07-22 13:12:18.052965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:123216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:58.652 [2024-07-22 13:12:18.052975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:58.652 [2024-07-22 13:12:18.052985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:123224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:58.652 [2024-07-22 13:12:18.052995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:58.652 [2024-07-22 13:12:18.053006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:123232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:58.652 [2024-07-22 13:12:18.053015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:58.652 [2024-07-22 13:12:18.053026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:123240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:58.652 [2024-07-22 13:12:18.053035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:49:58.652 [2024-07-22 13:12:18.053046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:123248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:58.652 [2024-07-22 13:12:18.053055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:58.652 [2024-07-22 13:12:18.053071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:123256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:58.652 [2024-07-22 13:12:18.053080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:58.652 [2024-07-22 13:12:18.053091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:123264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:58.652 [2024-07-22 13:12:18.053100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:58.652 [2024-07-22 13:12:18.053111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:123272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:58.652 [2024-07-22 13:12:18.053119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:58.652 [2024-07-22 13:12:18.053130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:123280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:58.652 [2024-07-22 13:12:18.053151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:58.652 [2024-07-22 13:12:18.053163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:123288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:58.652 [2024-07-22 13:12:18.053172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:58.652 [2024-07-22 13:12:18.053184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:122592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:58.652 [2024-07-22 13:12:18.053193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:58.652 [2024-07-22 13:12:18.053204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:122608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:58.652 [2024-07-22 13:12:18.053213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:58.652 [2024-07-22 13:12:18.053224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:122656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:58.652 [2024-07-22 13:12:18.053233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:58.652 [2024-07-22 13:12:18.053244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:122664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:58.652 [2024-07-22 13:12:18.053254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:58.652 
[2024-07-22 13:12:18.053265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:122672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:58.652 [2024-07-22 13:12:18.053274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:58.652 [2024-07-22 13:12:18.053291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:122680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:58.652 [2024-07-22 13:12:18.053300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:58.652 [2024-07-22 13:12:18.053319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:122688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:58.652 [2024-07-22 13:12:18.053329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:58.652 [2024-07-22 13:12:18.053339] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e197b0 is same with the state(5) to be set 00:49:58.652 [2024-07-22 13:12:18.053351] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:49:58.652 [2024-07-22 13:12:18.053359] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:49:58.652 [2024-07-22 13:12:18.053367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:122712 len:8 PRP1 0x0 PRP2 0x0 00:49:58.652 [2024-07-22 13:12:18.053376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:58.652 [2024-07-22 13:12:18.053429] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1e197b0 was disconnected and freed. reset controller. 00:49:58.652 [2024-07-22 13:12:18.053667] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:58.652 [2024-07-22 13:12:18.053755] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dde4f0 (9): Bad file descriptor 00:49:58.652 [2024-07-22 13:12:18.053874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:58.652 [2024-07-22 13:12:18.053923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:58.652 [2024-07-22 13:12:18.053939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dde4f0 with addr=10.0.0.2, port=4420 00:49:58.652 [2024-07-22 13:12:18.053950] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dde4f0 is same with the state(5) to be set 00:49:58.652 [2024-07-22 13:12:18.053968] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dde4f0 (9): Bad file descriptor 00:49:58.652 [2024-07-22 13:12:18.053984] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:58.653 [2024-07-22 13:12:18.053994] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:58.653 [2024-07-22 13:12:18.054004] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
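A note on the wall of "ABORTED - SQ DELETION (00/08)" completions above: when the TCP qpair to the target drops, nvme_qpair_abort_queued_reqs manually completes every request still queued on that submission queue with status code type 0x0 (generic) and status code 0x08, which SPDK prints as ABORTED - SQ DELETION; bdev_nvme then frees the qpair and enters the reset/reconnect loop seen below. A hypothetical helper (console.log is a made-up capture of this output, not a file produced by the test) to tally how many READ/WRITE commands were dumped that way:

```bash
# Count the commands printed by nvme_io_qpair_print_command during the qpair
# teardown, grouped by opcode name (READ/WRITE). "console.log" is assumed to
# be a saved copy of the console output above.
grep -Eo 'nvme_io_qpair_print_command: \*NOTICE\*: [A-Z]+' console.log \
  | awk '{ counts[$NF]++ } END { for (op in counts) print op, counts[op] }'
```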
00:49:58.653 [2024-07-22 13:12:18.054024] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:49:58.653 [2024-07-22 13:12:18.054034] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 
00:49:58.911 13:12:18 -- host/timeout.sh@101 -- # sleep 3 
00:49:59.846 [2024-07-22 13:12:19.054150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:49:59.846 [2024-07-22 13:12:19.054257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:49:59.846 [2024-07-22 13:12:19.054276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dde4f0 with addr=10.0.0.2, port=4420 
00:49:59.846 [2024-07-22 13:12:19.054289] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dde4f0 is same with the state(5) to be set 
00:49:59.846 [2024-07-22 13:12:19.054334] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dde4f0 (9): Bad file descriptor 
00:49:59.846 [2024-07-22 13:12:19.054362] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 
00:49:59.846 [2024-07-22 13:12:19.054372] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 
00:49:59.846 [2024-07-22 13:12:19.054381] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:49:59.846 [2024-07-22 13:12:19.054407] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:49:59.846 [2024-07-22 13:12:19.054418] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 
00:50:00.781 [2024-07-22 13:12:20.054498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:50:00.781 [2024-07-22 13:12:20.054616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:50:00.781 [2024-07-22 13:12:20.054664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dde4f0 with addr=10.0.0.2, port=4420 
00:50:00.781 [2024-07-22 13:12:20.054676] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dde4f0 is same with the state(5) to be set 
00:50:00.781 [2024-07-22 13:12:20.054697] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dde4f0 (9): Bad file descriptor 
00:50:00.781 [2024-07-22 13:12:20.054714] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 
00:50:00.781 [2024-07-22 13:12:20.054723] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 
00:50:00.781 [2024-07-22 13:12:20.054733] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:50:00.781 [2024-07-22 13:12:20.054754] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
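On the repeated "connect() failed, errno = 111" lines: errno 111 on Linux is ECONNREFUSED, i.e. nothing is accepting TCP connections on 10.0.0.2:4420 while the listener is removed, so each reconnect attempt made by spdk_nvme_ctrlr_reconnect_poll_async fails (roughly once per second in this run) until the listener comes back. A quick stand-alone check, not part of the test, that shows the same condition against any address/port with no listener:

```bash
# errno 111 == ECONNREFUSED: with no listener on the port, a plain TCP connect
# is refused -- the same condition posix_sock_create reports above.
# Address/port below mirror the log; substitute any closed port to try it.
if ! timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
    echo "connect() refused or timed out - no NVMe/TCP listener on 10.0.0.2:4420"
fi
```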
00:50:00.781 [2024-07-22 13:12:20.054766] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 
00:50:01.715 [2024-07-22 13:12:21.056981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:50:01.715 [2024-07-22 13:12:21.057070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:50:01.715 [2024-07-22 13:12:21.057088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dde4f0 with addr=10.0.0.2, port=4420 
00:50:01.715 [2024-07-22 13:12:21.057099] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dde4f0 is same with the state(5) to be set 
00:50:01.715 [2024-07-22 13:12:21.057262] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dde4f0 (9): Bad file descriptor 
00:50:01.715 [2024-07-22 13:12:21.057503] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 
00:50:01.715 [2024-07-22 13:12:21.057524] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 
00:50:01.715 [2024-07-22 13:12:21.057535] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:50:01.715 [2024-07-22 13:12:21.060046] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:50:01.715 [2024-07-22 13:12:21.060090] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 
00:50:01.715 13:12:21 -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:50:01.973 [2024-07-22 13:12:21.309206] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:50:01.973 13:12:21 -- host/timeout.sh@103 -- # wait 99973 
00:50:02.908 [2024-07-22 13:12:22.089039] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
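The recovery above is driven from the test script: host/timeout.sh@102 re-adds the TCP listener with nvmf_subsystem_add_listener, the target logs that it is listening on 10.0.0.2 port 4420 again, and the next reset attempt finishes with "Resetting controller successful". A rough sketch of the outage/recovery cycle this test case drives, not the literal host/timeout.sh (the matching nvmf_subsystem_remove_listener call for the next case is visible below at host/timeout.sh@126); only RPCs and paths that appear in this log are used:

```bash
# Sketch of the listener outage exercised by this test case.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

# Drop the listener: queued I/O on the host is aborted (SQ DELETION) and every
# reconnect attempt fails with errno 111 while the port is closed.
$rpc nvmf_subsystem_remove_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
sleep 3    # let a few reconnect attempts fail, as in the trace above

# Restore the listener: the next bdev_nvme reset succeeds
# ("Resetting controller successful").
$rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
```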
00:50:08.173 00:50:08.173 Latency(us) 00:50:08.173 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:50:08.173 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:50:08.173 Verification LBA range: start 0x0 length 0x4000 00:50:08.173 NVMe0n1 : 10.01 8210.22 32.07 6032.35 0.00 8972.05 640.47 3019898.88 00:50:08.173 =================================================================================================================== 00:50:08.173 Total : 8210.22 32.07 6032.35 0.00 8972.05 0.00 3019898.88 00:50:08.173 0 00:50:08.173 13:12:26 -- host/timeout.sh@105 -- # killprocess 99803 00:50:08.173 13:12:26 -- common/autotest_common.sh@926 -- # '[' -z 99803 ']' 00:50:08.173 13:12:26 -- common/autotest_common.sh@930 -- # kill -0 99803 00:50:08.173 13:12:26 -- common/autotest_common.sh@931 -- # uname 00:50:08.173 13:12:26 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:50:08.173 13:12:26 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 99803 00:50:08.173 killing process with pid 99803 00:50:08.173 Received shutdown signal, test time was about 10.000000 seconds 00:50:08.173 00:50:08.173 Latency(us) 00:50:08.173 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:50:08.173 =================================================================================================================== 00:50:08.173 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:50:08.173 13:12:26 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:50:08.173 13:12:26 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:50:08.173 13:12:26 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 99803' 00:50:08.173 13:12:26 -- common/autotest_common.sh@945 -- # kill 99803 00:50:08.173 13:12:26 -- common/autotest_common.sh@950 -- # wait 99803 00:50:08.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:50:08.174 13:12:27 -- host/timeout.sh@110 -- # bdevperf_pid=100094 00:50:08.174 13:12:27 -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:50:08.174 13:12:27 -- host/timeout.sh@112 -- # waitforlisten 100094 /var/tmp/bdevperf.sock 00:50:08.174 13:12:27 -- common/autotest_common.sh@819 -- # '[' -z 100094 ']' 00:50:08.174 13:12:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:50:08.174 13:12:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:50:08.174 13:12:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:50:08.174 13:12:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:50:08.174 13:12:27 -- common/autotest_common.sh@10 -- # set +x 00:50:08.174 [2024-07-22 13:12:27.234862] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
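For reading the bdevperf summary above: the columns are runtime(s), IOPS, MiB/s, Fail/s, TO/s, and average/min/max latency in microseconds, so NVMe0n1 sustained about 8210 IOPS with roughly 6032 failed I/Os per second (consistent with the aborts during the listener outage) and a worst-case latency of about 3.02 s. A hypothetical one-liner (bdevperf.log is a made-up capture of plain bdevperf output, without the Jenkins timestamp prefixes) to pull that row out as labeled fields:

```bash
# Print the per-bdev summary row with labels. Field order follows the header
# above: runtime(s) IOPS MiB/s Fail/s TO/s Average min max (latency in us).
awk '/NVMe0n1[[:space:]]*:/ {
       printf "runtime=%ss iops=%s mib_s=%s fail_s=%s to_s=%s avg_us=%s min_us=%s max_us=%s\n",
              $3, $4, $5, $6, $7, $8, $9, $10
     }' bdevperf.log
```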
00:50:08.174 [2024-07-22 13:12:27.235339] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100094 ] 00:50:08.174 [2024-07-22 13:12:27.371373] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:50:08.174 [2024-07-22 13:12:27.438870] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:50:09.108 13:12:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:50:09.108 13:12:28 -- common/autotest_common.sh@852 -- # return 0 00:50:09.108 13:12:28 -- host/timeout.sh@116 -- # dtrace_pid=100122 00:50:09.108 13:12:28 -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 100094 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:50:09.108 13:12:28 -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:50:09.108 13:12:28 -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:50:09.367 NVMe0n1 00:50:09.367 13:12:28 -- host/timeout.sh@124 -- # rpc_pid=100176 00:50:09.367 13:12:28 -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:50:09.367 13:12:28 -- host/timeout.sh@125 -- # sleep 1 00:50:09.625 Running I/O for 10 seconds... 00:50:10.559 13:12:29 -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:50:10.819 [2024-07-22 13:12:29.997110] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e62f0 is same with the state(5) to be set 00:50:10.819 [2024-07-22 13:12:29.997200] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e62f0 is same with the state(5) to be set 00:50:10.819 [2024-07-22 13:12:29.997213] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e62f0 is same with the state(5) to be set 00:50:10.819 [2024-07-22 13:12:29.997222] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e62f0 is same with the state(5) to be set 00:50:10.819 [2024-07-22 13:12:29.997230] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e62f0 is same with the state(5) to be set 00:50:10.819 [2024-07-22 13:12:29.997239] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e62f0 is same with the state(5) to be set 00:50:10.819 [2024-07-22 13:12:29.997247] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e62f0 is same with the state(5) to be set 00:50:10.819 [2024-07-22 13:12:29.997256] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e62f0 is same with the state(5) to be set 00:50:10.819 [2024-07-22 13:12:29.997264] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e62f0 is same with the state(5) to be set 00:50:10.819 [2024-07-22 13:12:29.997273] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e62f0 is same with the state(5) to be set 00:50:10.820 [2024-07-22 13:12:29.997281] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x13e62f0 is same with the state(5) to be set 00:50:10.820 [2024-07-22 13:12:29.997289] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e62f0 is same with the state(5) to be set 00:50:10.820 [2024-07-22 13:12:29.997300] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e62f0 is same with the state(5) to be set 00:50:10.820 [2024-07-22 13:12:29.997308] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e62f0 is same with the state(5) to be set 00:50:10.820 [2024-07-22 13:12:29.997317] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e62f0 is same with the state(5) to be set 00:50:10.820 [2024-07-22 13:12:29.997325] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e62f0 is same with the state(5) to be set 00:50:10.820 [2024-07-22 13:12:29.997333] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e62f0 is same with the state(5) to be set 00:50:10.820 [2024-07-22 13:12:29.997342] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e62f0 is same with the state(5) to be set 00:50:10.820 [2024-07-22 13:12:29.997350] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e62f0 is same with the state(5) to be set 00:50:10.820 [2024-07-22 13:12:29.997358] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e62f0 is same with the state(5) to be set 00:50:10.820 [2024-07-22 13:12:29.997366] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e62f0 is same with the state(5) to be set 00:50:10.820 [2024-07-22 13:12:29.997374] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e62f0 is same with the state(5) to be set 00:50:10.820 [2024-07-22 13:12:29.997382] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e62f0 is same with the state(5) to be set 00:50:10.820 [2024-07-22 13:12:29.997390] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e62f0 is same with the state(5) to be set 00:50:10.820 [2024-07-22 13:12:29.997398] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e62f0 is same with the state(5) to be set 00:50:10.820 [2024-07-22 13:12:29.997405] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e62f0 is same with the state(5) to be set 00:50:10.820 [2024-07-22 13:12:29.997413] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e62f0 is same with the state(5) to be set 00:50:10.820 [2024-07-22 13:12:29.997421] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e62f0 is same with the state(5) to be set 00:50:10.820 [2024-07-22 13:12:29.997429] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e62f0 is same with the state(5) to be set 00:50:10.820 [2024-07-22 13:12:29.997437] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e62f0 is same with the state(5) to be set 00:50:10.820 [2024-07-22 13:12:29.997444] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e62f0 is same with the state(5) to be set 00:50:10.820 [2024-07-22 13:12:29.997452] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e62f0 is same with the state(5) to be set 00:50:10.820 [2024-07-22 13:12:29.997460] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e62f0 is same with the state(5) to be set 00:50:10.820 [2024-07-22 13:12:29.997468] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e62f0 is same with the state(5) to be set 00:50:10.820 [2024-07-22 13:12:29.997476] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e62f0 is same with the state(5) to be set 00:50:10.820 [2024-07-22 13:12:29.997484] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e62f0 is same with the state(5) to be set 00:50:10.820 [2024-07-22 13:12:29.997492] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e62f0 is same with the state(5) to be set 00:50:10.820 [2024-07-22 13:12:29.997500] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e62f0 is same with the state(5) to be set 00:50:10.820 [2024-07-22 13:12:29.997507] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e62f0 is same with the state(5) to be set 00:50:10.820 [2024-07-22 13:12:29.997515] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e62f0 is same with the state(5) to be set 00:50:10.820 [2024-07-22 13:12:29.997523] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e62f0 is same with the state(5) to be set 00:50:10.820 [2024-07-22 13:12:29.997531] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e62f0 is same with the state(5) to be set 00:50:10.820 [2024-07-22 13:12:29.997875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:93440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.820 [2024-07-22 13:12:29.997906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:10.820 [2024-07-22 13:12:29.997928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:96968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.820 [2024-07-22 13:12:29.997939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:10.820 [2024-07-22 13:12:29.997952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.820 [2024-07-22 13:12:29.997961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:10.820 [2024-07-22 13:12:29.997972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:47888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.820 [2024-07-22 13:12:29.997981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:10.820 [2024-07-22 13:12:29.997992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:64832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.820 [2024-07-22 13:12:29.998001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:10.820 [2024-07-22 13:12:29.998012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:85688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.820 
[2024-07-22 13:12:29.998021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:10.820 [2024-07-22 13:12:29.998032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:48936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.820 [2024-07-22 13:12:29.998043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:10.820 [2024-07-22 13:12:29.998054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:53328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.820 [2024-07-22 13:12:29.998063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:10.820 [2024-07-22 13:12:29.998074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:68040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.820 [2024-07-22 13:12:29.998083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:10.820 [2024-07-22 13:12:29.998094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:109952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.820 [2024-07-22 13:12:29.998103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:10.820 [2024-07-22 13:12:29.998114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:18800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.820 [2024-07-22 13:12:29.998132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:10.820 [2024-07-22 13:12:29.998143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:88976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.820 [2024-07-22 13:12:29.998152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:10.820 [2024-07-22 13:12:29.998163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:21864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.820 [2024-07-22 13:12:29.998185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:10.820 [2024-07-22 13:12:29.998199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:115664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.820 [2024-07-22 13:12:29.998208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:10.820 [2024-07-22 13:12:29.998219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:88120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.820 [2024-07-22 13:12:29.998228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:10.820 [2024-07-22 13:12:29.998239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:2096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.820 [2024-07-22 13:12:29.998248] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:10.820 [2024-07-22 13:12:29.998259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:36952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.820 [2024-07-22 13:12:29.998271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:10.820 [2024-07-22 13:12:29.998285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:131008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.820 [2024-07-22 13:12:29.998299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:10.820 [2024-07-22 13:12:29.998310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:10376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.820 [2024-07-22 13:12:29.998319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:10.820 [2024-07-22 13:12:29.998330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:51992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.820 [2024-07-22 13:12:29.998339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:10.820 [2024-07-22 13:12:29.998351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:59496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.820 [2024-07-22 13:12:29.998360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:10.820 [2024-07-22 13:12:29.998370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:98552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.820 [2024-07-22 13:12:29.998379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:10.820 [2024-07-22 13:12:29.998390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:56552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.820 [2024-07-22 13:12:29.998399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:10.820 [2024-07-22 13:12:29.998410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:78504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.821 [2024-07-22 13:12:29.998419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:10.821 [2024-07-22 13:12:29.998430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:39128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.821 [2024-07-22 13:12:29.998439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:10.821 [2024-07-22 13:12:29.998450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:43160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.821 [2024-07-22 13:12:29.998460] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:10.821 [2024-07-22 13:12:29.998471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:126296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.821 [2024-07-22 13:12:29.998480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:10.821 [2024-07-22 13:12:29.998491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:87752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.821 [2024-07-22 13:12:29.998500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:10.821 [2024-07-22 13:12:29.998512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:88672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.821 [2024-07-22 13:12:29.998520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:10.821 [2024-07-22 13:12:29.998531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:59720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.821 [2024-07-22 13:12:29.998540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:10.821 [2024-07-22 13:12:29.998551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:112472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.821 [2024-07-22 13:12:29.998560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:10.821 [2024-07-22 13:12:29.998571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:43216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.821 [2024-07-22 13:12:29.998581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:10.821 [2024-07-22 13:12:29.998592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:111448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.821 [2024-07-22 13:12:29.998603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:10.821 [2024-07-22 13:12:29.998614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:46888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.821 [2024-07-22 13:12:29.998634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:10.821 [2024-07-22 13:12:29.998650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:36336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.821 [2024-07-22 13:12:29.998660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:10.821 [2024-07-22 13:12:29.998671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:91056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.821 [2024-07-22 13:12:29.998681] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:10.821 [2024-07-22 13:12:29.998692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:108408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.821 [2024-07-22 13:12:29.998701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:10.821 [2024-07-22 13:12:29.998712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:47080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.821 [2024-07-22 13:12:29.998720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:10.821 [2024-07-22 13:12:29.998732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:35240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.821 [2024-07-22 13:12:29.998741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:10.821 [2024-07-22 13:12:29.998752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:72664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.821 [2024-07-22 13:12:29.998760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:10.821 [2024-07-22 13:12:29.998771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:34816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.821 [2024-07-22 13:12:29.998780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:10.821 [2024-07-22 13:12:29.998791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:4768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.821 [2024-07-22 13:12:29.998800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:10.821 [2024-07-22 13:12:29.998811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:6264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.821 [2024-07-22 13:12:29.998820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:10.821 [2024-07-22 13:12:29.998831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:118544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.821 [2024-07-22 13:12:29.998840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:10.821 [2024-07-22 13:12:29.998851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:74688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.821 [2024-07-22 13:12:29.998860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:10.821 [2024-07-22 13:12:29.998873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:98288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.821 [2024-07-22 13:12:29.998883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:10.821 [2024-07-22 13:12:29.998894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:110064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.821 [2024-07-22 13:12:29.998903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:10.821 [2024-07-22 13:12:29.998915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:62664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.821 [2024-07-22 13:12:29.998924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:10.821 [2024-07-22 13:12:29.998935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:71416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.821 [2024-07-22 13:12:29.998944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:10.821 [2024-07-22 13:12:29.998955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:104976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.821 [2024-07-22 13:12:29.998965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:10.821 [2024-07-22 13:12:29.998976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:86808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.821 [2024-07-22 13:12:29.998986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:10.821 [2024-07-22 13:12:29.998997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:52712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.821 [2024-07-22 13:12:29.999007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:10.821 [2024-07-22 13:12:29.999018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:105328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.821 [2024-07-22 13:12:29.999027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:10.821 [2024-07-22 13:12:29.999038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:101520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.821 [2024-07-22 13:12:29.999047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:10.821 [2024-07-22 13:12:29.999058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:112232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.821 [2024-07-22 13:12:29.999067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:10.821 [2024-07-22 13:12:29.999077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:62672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.821 [2024-07-22 13:12:29.999086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:50:10.821 [2024-07-22 13:12:29.999097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:6240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.821 [2024-07-22 13:12:29.999107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:10.821 [2024-07-22 13:12:29.999118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:107672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.821 [2024-07-22 13:12:29.999127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:10.821 [2024-07-22 13:12:29.999148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:48104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.821 [2024-07-22 13:12:29.999159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:10.821 [2024-07-22 13:12:29.999170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:68968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.821 [2024-07-22 13:12:29.999181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:10.821 [2024-07-22 13:12:29.999192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:119440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.821 [2024-07-22 13:12:29.999201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:10.821 [2024-07-22 13:12:29.999212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:82136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.821 [2024-07-22 13:12:29.999222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:10.821 [2024-07-22 13:12:29.999245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:68888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.821 [2024-07-22 13:12:29.999254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:10.822 [2024-07-22 13:12:29.999268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:62736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.822 [2024-07-22 13:12:29.999278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:10.822 [2024-07-22 13:12:29.999289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:15432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.822 [2024-07-22 13:12:29.999305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:10.822 [2024-07-22 13:12:29.999316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:57704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.822 [2024-07-22 13:12:29.999325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:10.822 [2024-07-22 
13:12:29.999336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:43280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.822 [2024-07-22 13:12:29.999345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:10.822 [2024-07-22 13:12:29.999356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:62128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.822 [2024-07-22 13:12:29.999365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:10.822 [2024-07-22 13:12:29.999376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:34120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.822 [2024-07-22 13:12:29.999385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:10.822 [2024-07-22 13:12:29.999396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:88040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.822 [2024-07-22 13:12:29.999405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:10.822 [2024-07-22 13:12:29.999416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:13368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.822 [2024-07-22 13:12:29.999425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:10.822 [2024-07-22 13:12:29.999435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.822 [2024-07-22 13:12:29.999444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:10.822 [2024-07-22 13:12:29.999461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:117472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.822 [2024-07-22 13:12:29.999471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:10.822 [2024-07-22 13:12:29.999482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:96144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.822 [2024-07-22 13:12:29.999491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:10.822 [2024-07-22 13:12:29.999503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:125088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.822 [2024-07-22 13:12:29.999512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:10.822 [2024-07-22 13:12:29.999528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:56032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.822 [2024-07-22 13:12:29.999538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:10.822 [2024-07-22 13:12:29.999549] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:116664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.822 [2024-07-22 13:12:29.999558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:10.822 [2024-07-22 13:12:29.999569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:9456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.822 [2024-07-22 13:12:29.999578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:10.822 [2024-07-22 13:12:29.999589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:107976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.822 [2024-07-22 13:12:29.999598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:10.822 [2024-07-22 13:12:29.999609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:45744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.822 [2024-07-22 13:12:29.999618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:10.822 [2024-07-22 13:12:29.999629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:91184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.822 [2024-07-22 13:12:29.999639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:10.822 [2024-07-22 13:12:29.999650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:17976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.822 [2024-07-22 13:12:29.999659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:10.822 [2024-07-22 13:12:29.999670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:32968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.822 [2024-07-22 13:12:29.999680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:10.822 [2024-07-22 13:12:29.999691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:113424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.822 [2024-07-22 13:12:29.999699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:10.822 [2024-07-22 13:12:29.999710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:9736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.822 [2024-07-22 13:12:29.999719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:10.822 [2024-07-22 13:12:29.999730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:29440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.822 [2024-07-22 13:12:29.999739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:10.822 [2024-07-22 13:12:29.999750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:105 nsid:1 lba:29696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.822 [2024-07-22 13:12:29.999759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:10.822 [2024-07-22 13:12:29.999769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:96856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.822 [2024-07-22 13:12:29.999780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:10.822 [2024-07-22 13:12:29.999796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:90232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.822 [2024-07-22 13:12:29.999806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:10.822 [2024-07-22 13:12:29.999817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:91256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.822 [2024-07-22 13:12:29.999826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:10.822 [2024-07-22 13:12:29.999836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:94104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.822 [2024-07-22 13:12:29.999846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:10.822 [2024-07-22 13:12:29.999857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:106600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.822 [2024-07-22 13:12:29.999866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:10.822 [2024-07-22 13:12:29.999876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:33872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.822 [2024-07-22 13:12:29.999885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:10.822 [2024-07-22 13:12:29.999896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:25920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.822 [2024-07-22 13:12:29.999905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:10.822 [2024-07-22 13:12:29.999916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:10488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.822 [2024-07-22 13:12:29.999930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:10.822 [2024-07-22 13:12:29.999941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:14000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.822 [2024-07-22 13:12:29.999957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:10.822 [2024-07-22 13:12:29.999968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:36808 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.822 [2024-07-22 13:12:29.999978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:10.822 [2024-07-22 13:12:29.999989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:123256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.822 [2024-07-22 13:12:29.999998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:10.822 [2024-07-22 13:12:30.000011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:52032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.822 [2024-07-22 13:12:30.000020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:10.822 [2024-07-22 13:12:30.000033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:86024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.822 [2024-07-22 13:12:30.000042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:10.822 [2024-07-22 13:12:30.000052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:57008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.822 [2024-07-22 13:12:30.000062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:10.822 [2024-07-22 13:12:30.000072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:46792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.822 [2024-07-22 13:12:30.000081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:10.823 [2024-07-22 13:12:30.000092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:116448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.823 [2024-07-22 13:12:30.000101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:10.823 [2024-07-22 13:12:30.000112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:14552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.823 [2024-07-22 13:12:30.000126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:10.823 [2024-07-22 13:12:30.000150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:64064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.823 [2024-07-22 13:12:30.000160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:10.823 [2024-07-22 13:12:30.000171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:115800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.823 [2024-07-22 13:12:30.000181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:10.823 [2024-07-22 13:12:30.000192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:90584 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:50:10.823 [2024-07-22 13:12:30.000201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:10.823 [2024-07-22 13:12:30.000211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:67336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.823 [2024-07-22 13:12:30.000221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:10.823 [2024-07-22 13:12:30.000232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:18760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.823 [2024-07-22 13:12:30.000240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:10.823 [2024-07-22 13:12:30.000251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:121544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.823 [2024-07-22 13:12:30.000260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:10.823 [2024-07-22 13:12:30.000277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:124856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.823 [2024-07-22 13:12:30.000292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:10.823 [2024-07-22 13:12:30.000313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:100864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.823 [2024-07-22 13:12:30.000322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:10.823 [2024-07-22 13:12:30.000333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:106504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.823 [2024-07-22 13:12:30.000342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:10.823 [2024-07-22 13:12:30.000353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:101432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.823 [2024-07-22 13:12:30.000362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:10.823 [2024-07-22 13:12:30.000373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:96640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.823 [2024-07-22 13:12:30.000382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:10.823 [2024-07-22 13:12:30.000392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:96912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.823 [2024-07-22 13:12:30.000402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:10.823 [2024-07-22 13:12:30.000418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:45120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.823 
[2024-07-22 13:12:30.000429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:10.823 [2024-07-22 13:12:30.000439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:7248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.823 [2024-07-22 13:12:30.000450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:10.823 [2024-07-22 13:12:30.000462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:48568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.823 [2024-07-22 13:12:30.000474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:10.823 [2024-07-22 13:12:30.000498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:52024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.823 [2024-07-22 13:12:30.000507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:10.823 [2024-07-22 13:12:30.000523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:47408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.823 [2024-07-22 13:12:30.000533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:10.823 [2024-07-22 13:12:30.000544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:4144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.823 [2024-07-22 13:12:30.000553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:10.823 [2024-07-22 13:12:30.000564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:118872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.823 [2024-07-22 13:12:30.000573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:10.823 [2024-07-22 13:12:30.000583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:79808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.823 [2024-07-22 13:12:30.000592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:10.823 [2024-07-22 13:12:30.000603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:63400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.823 [2024-07-22 13:12:30.000612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:10.823 [2024-07-22 13:12:30.000622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:24192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.823 [2024-07-22 13:12:30.000631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:10.823 [2024-07-22 13:12:30.000647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:70392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:10.823 [2024-07-22 13:12:30.000656] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:10.823 [2024-07-22 13:12:30.000686] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:50:10.823 [2024-07-22 13:12:30.000695] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:50:10.823 [2024-07-22 13:12:30.000704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:30112 len:8 PRP1 0x0 PRP2 0x0 00:50:10.823 [2024-07-22 13:12:30.000713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:10.823 [2024-07-22 13:12:30.000767] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1b1ca10 was disconnected and freed. reset controller. 00:50:10.823 [2024-07-22 13:12:30.001039] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:50:10.823 [2024-07-22 13:12:30.001122] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b11510 (9): Bad file descriptor 00:50:10.823 [2024-07-22 13:12:30.001259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:10.823 [2024-07-22 13:12:30.001313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:10.823 [2024-07-22 13:12:30.001330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b11510 with addr=10.0.0.2, port=4420 00:50:10.823 [2024-07-22 13:12:30.001342] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b11510 is same with the state(5) to be set 00:50:10.823 [2024-07-22 13:12:30.001363] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b11510 (9): Bad file descriptor 00:50:10.823 [2024-07-22 13:12:30.001380] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:50:10.823 [2024-07-22 13:12:30.001400] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:50:10.823 [2024-07-22 13:12:30.001414] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:50:10.823 [2024-07-22 13:12:30.001443] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
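The long run of ABORTED - SQ DELETION (00/08) notices above is the expected fallout of the reset path: when bdev_nvme disconnects and frees the TCP qpair (0x1b1ca10), every queued read is completed manually with the generic status code 0x08, Command Aborted due to SQ Deletion, before the controller is reconnected. The queue being drained is filled by bdevperf; the job summary further down reports core mask 0x4, randread, queue depth 128 and 4096-byte I/O, so an invocation along these lines would build up a comparable backlog (a hedged sketch only, not the literal command line used by host/timeout.sh; the socket path and run time are illustrative):
# start bdevperf in "wait for RPC" mode: 4 KiB random reads, queue depth 128, on core 2
./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 &
# after an NVMe-oF bdev has been attached over that socket, kick off the run
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests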
00:50:10.823 [2024-07-22 13:12:30.001454] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:50:10.823 13:12:30 -- host/timeout.sh@128 -- # wait 100176 00:50:12.725 [2024-07-22 13:12:32.001618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:12.725 [2024-07-22 13:12:32.001718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:12.725 [2024-07-22 13:12:32.001737] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b11510 with addr=10.0.0.2, port=4420 00:50:12.725 [2024-07-22 13:12:32.001751] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b11510 is same with the state(5) to be set 00:50:12.725 [2024-07-22 13:12:32.001775] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b11510 (9): Bad file descriptor 00:50:12.725 [2024-07-22 13:12:32.001794] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:50:12.725 [2024-07-22 13:12:32.001804] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:50:12.725 [2024-07-22 13:12:32.001814] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:50:12.725 [2024-07-22 13:12:32.001856] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:50:12.725 [2024-07-22 13:12:32.001868] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:50:14.627 [2024-07-22 13:12:34.002059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:14.627 [2024-07-22 13:12:34.002197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:14.627 [2024-07-22 13:12:34.002216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b11510 with addr=10.0.0.2, port=4420 00:50:14.627 [2024-07-22 13:12:34.002247] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b11510 is same with the state(5) to be set 00:50:14.627 [2024-07-22 13:12:34.002273] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b11510 (9): Bad file descriptor 00:50:14.627 [2024-07-22 13:12:34.002291] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:50:14.627 [2024-07-22 13:12:34.002301] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:50:14.627 [2024-07-22 13:12:34.002311] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:50:14.627 [2024-07-22 13:12:34.002338] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:50:14.627 [2024-07-22 13:12:34.002350] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:50:17.157 [2024-07-22 13:12:36.002418] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
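Each retry above lands roughly two seconds after the previous one (13:12:30, :32, :34, :36), with connect() failing with errno 111 (connection refused) while the target listener is down; that cadence comes from the reconnect settings of the attached bdev_nvme controller, and each delayed attempt is what later shows up as a 'reconnect delay bdev controller NVMe0' entry in the trace counted below. As a rough illustration only (the flag values are hypothetical and not taken from host/timeout.sh, and the --reconnect-delay-sec / --ctrlr-loss-timeout-sec options are assumed to be available in this SPDK branch), a controller attached like this would retry on the same schedule:
# attach an NVMe-oF/TCP controller that retries every 2 s and gives up after ~8 s of controller loss
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    --reconnect-delay-sec 2 --ctrlr-loss-timeout-sec 8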
00:50:17.157 [2024-07-22 13:12:36.002475] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 
00:50:17.157 [2024-07-22 13:12:36.002503] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 
00:50:17.157 [2024-07-22 13:12:36.002513] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 
00:50:17.157 [2024-07-22 13:12:36.002552] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:50:17.724 
00:50:17.724 Latency(us) 
00:50:17.724 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:50:17.724 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 
00:50:17.724 NVMe0n1 : 8.13 2847.25 11.12 15.74 0.00 44670.21 3395.96 7015926.69 
00:50:17.724 =================================================================================================================== 
00:50:17.724 Total : 2847.25 11.12 15.74 0.00 44670.21 3395.96 7015926.69 
00:50:17.724 0 
00:50:17.724 13:12:37 -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 
00:50:17.724 Attaching 5 probes... 
00:50:17.724 1280.113904: reset bdev controller NVMe0 
00:50:17.724 1280.262203: reconnect bdev controller NVMe0 
00:50:17.724 3280.578373: reconnect delay bdev controller NVMe0 
00:50:17.724 3280.613679: reconnect bdev controller NVMe0 
00:50:17.724 5280.988879: reconnect delay bdev controller NVMe0 
00:50:17.724 5281.027745: reconnect bdev controller NVMe0 
00:50:17.724 7281.466954: reconnect delay bdev controller NVMe0 
00:50:17.724 7281.484917: reconnect bdev controller NVMe0 
00:50:17.724 13:12:37 -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 
00:50:17.724 13:12:37 -- host/timeout.sh@132 -- # (( 3 <= 2 )) 
00:50:17.724 13:12:37 -- host/timeout.sh@136 -- # kill 100122 
00:50:17.724 13:12:37 -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 
00:50:17.724 13:12:37 -- host/timeout.sh@139 -- # killprocess 100094 
00:50:17.724 13:12:37 -- common/autotest_common.sh@926 -- # '[' -z 100094 ']' 
00:50:17.724 13:12:37 -- common/autotest_common.sh@930 -- # kill -0 100094 
00:50:17.724 13:12:37 -- common/autotest_common.sh@931 -- # uname 
00:50:17.724 13:12:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 
00:50:17.724 13:12:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 100094 
00:50:17.724 killing process with pid 100094 Received shutdown signal, test time was about 8.196100 seconds 
00:50:17.724 
00:50:17.724 Latency(us) 
00:50:17.724 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:50:17.724 =================================================================================================================== 
00:50:17.724 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 
00:50:17.724 13:12:37 -- common/autotest_common.sh@932 -- # process_name=reactor_2 
00:50:17.724 13:12:37 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 
00:50:17.724 13:12:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 100094' 
00:50:17.724 13:12:37 -- common/autotest_common.sh@945 -- # kill 100094 
00:50:17.724 13:12:37 -- common/autotest_common.sh@950 -- # wait 100094 
00:50:17.724 13:12:37 -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:50:18.241 13:12:37 -- host/timeout.sh@143 -- # trap - 
SIGINT SIGTERM EXIT 00:50:18.241 13:12:37 -- host/timeout.sh@145 -- # nvmftestfini 00:50:18.241 13:12:37 -- nvmf/common.sh@476 -- # nvmfcleanup 00:50:18.241 13:12:37 -- nvmf/common.sh@116 -- # sync 00:50:18.241 13:12:37 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:50:18.241 13:12:37 -- nvmf/common.sh@119 -- # set +e 00:50:18.242 13:12:37 -- nvmf/common.sh@120 -- # for i in {1..20} 00:50:18.242 13:12:37 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:50:18.242 rmmod nvme_tcp 00:50:18.242 rmmod nvme_fabrics 00:50:18.242 rmmod nvme_keyring 00:50:18.242 13:12:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:50:18.242 13:12:37 -- nvmf/common.sh@123 -- # set -e 00:50:18.242 13:12:37 -- nvmf/common.sh@124 -- # return 0 00:50:18.242 13:12:37 -- nvmf/common.sh@477 -- # '[' -n 99513 ']' 00:50:18.242 13:12:37 -- nvmf/common.sh@478 -- # killprocess 99513 00:50:18.242 13:12:37 -- common/autotest_common.sh@926 -- # '[' -z 99513 ']' 00:50:18.242 13:12:37 -- common/autotest_common.sh@930 -- # kill -0 99513 00:50:18.242 13:12:37 -- common/autotest_common.sh@931 -- # uname 00:50:18.242 13:12:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:50:18.242 13:12:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 99513 00:50:18.500 killing process with pid 99513 00:50:18.500 13:12:37 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:50:18.500 13:12:37 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:50:18.500 13:12:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 99513' 00:50:18.500 13:12:37 -- common/autotest_common.sh@945 -- # kill 99513 00:50:18.500 13:12:37 -- common/autotest_common.sh@950 -- # wait 99513 00:50:18.500 13:12:37 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:50:18.500 13:12:37 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:50:18.500 13:12:37 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:50:18.500 13:12:37 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:50:18.500 13:12:37 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:50:18.500 13:12:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:50:18.501 13:12:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:50:18.501 13:12:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:50:18.759 13:12:37 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:50:18.759 00:50:18.759 real 0m46.894s 00:50:18.759 user 2m17.719s 00:50:18.759 sys 0m5.153s 00:50:18.759 13:12:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:50:18.759 13:12:37 -- common/autotest_common.sh@10 -- # set +x 00:50:18.759 ************************************ 00:50:18.759 END TEST nvmf_timeout 00:50:18.759 ************************************ 00:50:18.759 13:12:37 -- nvmf/nvmf.sh@120 -- # [[ virt == phy ]] 00:50:18.759 13:12:37 -- nvmf/nvmf.sh@127 -- # timing_exit host 00:50:18.759 13:12:37 -- common/autotest_common.sh@718 -- # xtrace_disable 00:50:18.759 13:12:37 -- common/autotest_common.sh@10 -- # set +x 00:50:18.759 13:12:38 -- nvmf/nvmf.sh@129 -- # trap - SIGINT SIGTERM EXIT 00:50:18.759 ************************************ 00:50:18.759 END TEST nvmf_tcp 00:50:18.759 ************************************ 00:50:18.759 00:50:18.759 real 17m9.338s 00:50:18.759 user 54m32.390s 00:50:18.759 sys 3m51.561s 00:50:18.759 13:12:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:50:18.759 13:12:38 -- common/autotest_common.sh@10 -- # set +x 00:50:18.759 13:12:38 -- spdk/autotest.sh@296 -- # 
[[ 0 -eq 0 ]] 00:50:18.759 13:12:38 -- spdk/autotest.sh@297 -- # run_test spdkcli_nvmf_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:50:18.759 13:12:38 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:50:18.759 13:12:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:50:18.759 13:12:38 -- common/autotest_common.sh@10 -- # set +x 00:50:18.759 ************************************ 00:50:18.759 START TEST spdkcli_nvmf_tcp 00:50:18.759 ************************************ 00:50:18.759 13:12:38 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:50:18.759 * Looking for test storage... 00:50:18.759 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:50:18.759 13:12:38 -- spdkcli/nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:50:18.759 13:12:38 -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:50:18.759 13:12:38 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:50:18.759 13:12:38 -- spdkcli/nvmf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:50:18.759 13:12:38 -- nvmf/common.sh@7 -- # uname -s 00:50:18.759 13:12:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:50:18.759 13:12:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:50:18.759 13:12:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:50:18.759 13:12:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:50:18.759 13:12:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:50:18.759 13:12:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:50:18.759 13:12:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:50:18.759 13:12:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:50:18.759 13:12:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:50:18.759 13:12:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:50:18.759 13:12:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:864ae4a3-cd96-439b-8ef2-6dab4d992115 00:50:18.759 13:12:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=864ae4a3-cd96-439b-8ef2-6dab4d992115 00:50:18.759 13:12:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:50:18.759 13:12:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:50:18.759 13:12:38 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:50:18.759 13:12:38 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:50:18.759 13:12:38 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:50:18.759 13:12:38 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:50:18.759 13:12:38 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:50:18.759 13:12:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:50:18.759 13:12:38 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:50:18.759 13:12:38 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:50:18.759 13:12:38 -- paths/export.sh@5 -- # export PATH 00:50:18.759 13:12:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:50:19.018 13:12:38 -- nvmf/common.sh@46 -- # : 0 00:50:19.018 13:12:38 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:50:19.018 13:12:38 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:50:19.018 13:12:38 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:50:19.018 13:12:38 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:50:19.018 13:12:38 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:50:19.018 13:12:38 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:50:19.018 13:12:38 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:50:19.018 13:12:38 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:50:19.018 13:12:38 -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:50:19.018 13:12:38 -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:50:19.018 13:12:38 -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:50:19.018 13:12:38 -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:50:19.018 13:12:38 -- common/autotest_common.sh@712 -- # xtrace_disable 00:50:19.018 13:12:38 -- common/autotest_common.sh@10 -- # set +x 00:50:19.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:50:19.018 13:12:38 -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:50:19.018 13:12:38 -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=100391 00:50:19.018 13:12:38 -- spdkcli/common.sh@34 -- # waitforlisten 100391 00:50:19.018 13:12:38 -- common/autotest_common.sh@819 -- # '[' -z 100391 ']' 00:50:19.018 13:12:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:50:19.018 13:12:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:50:19.018 13:12:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:50:19.018 13:12:38 -- spdkcli/common.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:50:19.018 13:12:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:50:19.018 13:12:38 -- common/autotest_common.sh@10 -- # set +x 00:50:19.018 [2024-07-22 13:12:38.248764] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
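Before spdkcli_job.py can push any configuration, the test launches nvmf_tgt with '-m 0x3 -p 0' and sits in waitforlisten until the target answers on /var/tmp/spdk.sock. A minimal stand-in for that wait loop looks roughly like this (a sketch only, not the repo's waitforlisten helper; spdk_get_version is used here simply as a cheap RPC to probe readiness):
# poll the RPC socket until nvmf_tgt (pid 100391 above) responds, giving up after ~10 s
for _ in $(seq 1 100); do
    scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version &>/dev/null && break
    sleep 0.1
done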
00:50:19.018 [2024-07-22 13:12:38.249649] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100391 ] 00:50:19.018 [2024-07-22 13:12:38.389046] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:50:19.276 [2024-07-22 13:12:38.465263] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:50:19.276 [2024-07-22 13:12:38.465809] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:50:19.276 [2024-07-22 13:12:38.465965] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:50:19.842 13:12:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:50:19.842 13:12:39 -- common/autotest_common.sh@852 -- # return 0 00:50:19.842 13:12:39 -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:50:19.842 13:12:39 -- common/autotest_common.sh@718 -- # xtrace_disable 00:50:19.842 13:12:39 -- common/autotest_common.sh@10 -- # set +x 00:50:20.100 13:12:39 -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:50:20.100 13:12:39 -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:50:20.100 13:12:39 -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:50:20.100 13:12:39 -- common/autotest_common.sh@712 -- # xtrace_disable 00:50:20.100 13:12:39 -- common/autotest_common.sh@10 -- # set +x 00:50:20.100 13:12:39 -- spdkcli/nvmf.sh@65 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:50:20.100 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:50:20.100 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:50:20.100 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:50:20.100 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:50:20.100 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:50:20.100 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:50:20.100 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:50:20.100 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:50:20.100 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:50:20.100 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:50:20.100 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:50:20.100 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:50:20.100 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:50:20.100 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:50:20.100 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:50:20.100 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:50:20.100 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:50:20.100 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:50:20.101 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:50:20.101 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:50:20.101 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:50:20.101 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:50:20.101 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:50:20.101 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:50:20.101 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:50:20.101 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:50:20.101 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:50:20.101 ' 00:50:20.359 [2024-07-22 13:12:39.682429] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:50:22.912 [2024-07-22 13:12:41.901870] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:50:23.847 [2024-07-22 13:12:43.175372] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:50:26.377 [2024-07-22 13:12:45.537419] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:50:28.277 [2024-07-22 13:12:47.571231] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:50:30.180 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:50:30.180 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:50:30.180 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:50:30.180 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:50:30.180 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:50:30.180 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:50:30.180 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:50:30.180 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:50:30.180 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:50:30.180 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:50:30.180 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:50:30.180 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:50:30.180 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:50:30.180 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:50:30.180 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:50:30.180 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:50:30.180 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:50:30.180 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:50:30.180 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:50:30.180 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:50:30.180 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:50:30.180 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:50:30.180 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:50:30.180 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:50:30.180 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:50:30.180 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:50:30.180 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:50:30.180 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:50:30.180 13:12:49 -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:50:30.180 13:12:49 -- common/autotest_common.sh@718 -- # xtrace_disable 00:50:30.180 13:12:49 -- common/autotest_common.sh@10 -- # set +x 00:50:30.180 13:12:49 -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:50:30.180 13:12:49 -- common/autotest_common.sh@712 -- # xtrace_disable 00:50:30.180 13:12:49 -- common/autotest_common.sh@10 -- # set +x 00:50:30.180 13:12:49 -- spdkcli/nvmf.sh@69 -- # check_match 00:50:30.180 13:12:49 -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /nvmf 00:50:30.439 13:12:49 -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:50:30.439 13:12:49 -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:50:30.439 13:12:49 -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:50:30.439 13:12:49 -- common/autotest_common.sh@718 -- # xtrace_disable 00:50:30.439 13:12:49 -- common/autotest_common.sh@10 -- # set +x 00:50:30.439 13:12:49 -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:50:30.439 13:12:49 -- common/autotest_common.sh@712 -- # xtrace_disable 00:50:30.439 13:12:49 -- 
common/autotest_common.sh@10 -- # set +x 00:50:30.439 13:12:49 -- spdkcli/nvmf.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:50:30.439 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:50:30.439 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:50:30.439 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:50:30.439 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:50:30.439 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:50:30.439 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:50:30.439 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:50:30.439 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:50:30.439 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:50:30.439 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:50:30.439 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:50:30.439 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:50:30.439 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:50:30.439 ' 00:50:35.833 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:50:35.833 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:50:35.833 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:50:35.833 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:50:35.834 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:50:35.834 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:50:35.834 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:50:35.834 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:50:35.834 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:50:35.834 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:50:35.834 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:50:35.834 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:50:35.834 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:50:35.834 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:50:35.834 13:12:55 -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:50:35.834 13:12:55 -- common/autotest_common.sh@718 -- # xtrace_disable 00:50:35.834 13:12:55 -- common/autotest_common.sh@10 -- # set +x 00:50:35.834 13:12:55 -- spdkcli/nvmf.sh@90 -- # killprocess 100391 00:50:35.834 13:12:55 -- common/autotest_common.sh@926 -- # '[' -z 100391 ']' 00:50:35.834 13:12:55 -- common/autotest_common.sh@930 -- # kill -0 100391 00:50:35.834 13:12:55 -- common/autotest_common.sh@931 -- # uname 00:50:35.834 13:12:55 -- 
common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:50:35.834 13:12:55 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 100391 00:50:35.834 13:12:55 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:50:35.834 13:12:55 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:50:35.834 13:12:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 100391' 00:50:35.834 killing process with pid 100391 00:50:35.834 13:12:55 -- common/autotest_common.sh@945 -- # kill 100391 00:50:35.834 [2024-07-22 13:12:55.248522] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:50:35.834 13:12:55 -- common/autotest_common.sh@950 -- # wait 100391 00:50:36.093 Process with pid 100391 is not found 00:50:36.093 13:12:55 -- spdkcli/nvmf.sh@1 -- # cleanup 00:50:36.093 13:12:55 -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:50:36.093 13:12:55 -- spdkcli/common.sh@13 -- # '[' -n 100391 ']' 00:50:36.093 13:12:55 -- spdkcli/common.sh@14 -- # killprocess 100391 00:50:36.093 13:12:55 -- common/autotest_common.sh@926 -- # '[' -z 100391 ']' 00:50:36.093 13:12:55 -- common/autotest_common.sh@930 -- # kill -0 100391 00:50:36.093 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (100391) - No such process 00:50:36.093 13:12:55 -- common/autotest_common.sh@953 -- # echo 'Process with pid 100391 is not found' 00:50:36.093 13:12:55 -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:50:36.093 13:12:55 -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:50:36.093 13:12:55 -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_nvmf.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:50:36.093 ************************************ 00:50:36.093 END TEST spdkcli_nvmf_tcp 00:50:36.093 ************************************ 00:50:36.093 00:50:36.093 real 0m17.364s 00:50:36.093 user 0m37.243s 00:50:36.093 sys 0m1.048s 00:50:36.093 13:12:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:50:36.093 13:12:55 -- common/autotest_common.sh@10 -- # set +x 00:50:36.093 13:12:55 -- spdk/autotest.sh@298 -- # run_test nvmf_identify_passthru /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:50:36.093 13:12:55 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:50:36.093 13:12:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:50:36.093 13:12:55 -- common/autotest_common.sh@10 -- # set +x 00:50:36.093 ************************************ 00:50:36.093 START TEST nvmf_identify_passthru 00:50:36.093 ************************************ 00:50:36.093 13:12:55 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:50:36.351 * Looking for test storage... 
00:50:36.351 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:50:36.351 13:12:55 -- target/identify_passthru.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:50:36.351 13:12:55 -- nvmf/common.sh@7 -- # uname -s 00:50:36.351 13:12:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:50:36.351 13:12:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:50:36.351 13:12:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:50:36.351 13:12:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:50:36.351 13:12:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:50:36.351 13:12:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:50:36.351 13:12:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:50:36.351 13:12:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:50:36.352 13:12:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:50:36.352 13:12:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:50:36.352 13:12:55 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:864ae4a3-cd96-439b-8ef2-6dab4d992115 00:50:36.352 13:12:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=864ae4a3-cd96-439b-8ef2-6dab4d992115 00:50:36.352 13:12:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:50:36.352 13:12:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:50:36.352 13:12:55 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:50:36.352 13:12:55 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:50:36.352 13:12:55 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:50:36.352 13:12:55 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:50:36.352 13:12:55 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:50:36.352 13:12:55 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:50:36.352 13:12:55 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:50:36.352 13:12:55 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:50:36.352 13:12:55 -- paths/export.sh@5 -- # export PATH 00:50:36.352 13:12:55 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:50:36.352 13:12:55 -- nvmf/common.sh@46 -- # : 0 00:50:36.352 13:12:55 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:50:36.352 13:12:55 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:50:36.352 13:12:55 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:50:36.352 13:12:55 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:50:36.352 13:12:55 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:50:36.352 13:12:55 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:50:36.352 13:12:55 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:50:36.352 13:12:55 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:50:36.352 13:12:55 -- target/identify_passthru.sh@10 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:50:36.352 13:12:55 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:50:36.352 13:12:55 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:50:36.352 13:12:55 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:50:36.352 13:12:55 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:50:36.352 13:12:55 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:50:36.352 13:12:55 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:50:36.352 13:12:55 -- paths/export.sh@5 -- # export PATH 00:50:36.352 13:12:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:50:36.352 13:12:55 -- 
target/identify_passthru.sh@12 -- # nvmftestinit 00:50:36.352 13:12:55 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:50:36.352 13:12:55 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:50:36.352 13:12:55 -- nvmf/common.sh@436 -- # prepare_net_devs 00:50:36.352 13:12:55 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:50:36.352 13:12:55 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:50:36.352 13:12:55 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:50:36.352 13:12:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:50:36.352 13:12:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:50:36.352 13:12:55 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:50:36.352 13:12:55 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:50:36.352 13:12:55 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:50:36.352 13:12:55 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:50:36.352 13:12:55 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:50:36.352 13:12:55 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:50:36.352 13:12:55 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:50:36.352 13:12:55 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:50:36.352 13:12:55 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:50:36.352 13:12:55 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:50:36.352 13:12:55 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:50:36.352 13:12:55 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:50:36.352 13:12:55 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:50:36.352 13:12:55 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:50:36.352 13:12:55 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:50:36.352 13:12:55 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:50:36.352 13:12:55 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:50:36.352 13:12:55 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:50:36.352 13:12:55 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:50:36.352 13:12:55 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:50:36.352 Cannot find device "nvmf_tgt_br" 00:50:36.352 13:12:55 -- nvmf/common.sh@154 -- # true 00:50:36.352 13:12:55 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:50:36.352 Cannot find device "nvmf_tgt_br2" 00:50:36.352 13:12:55 -- nvmf/common.sh@155 -- # true 00:50:36.352 13:12:55 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:50:36.352 13:12:55 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:50:36.352 Cannot find device "nvmf_tgt_br" 00:50:36.352 13:12:55 -- nvmf/common.sh@157 -- # true 00:50:36.352 13:12:55 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:50:36.352 Cannot find device "nvmf_tgt_br2" 00:50:36.352 13:12:55 -- nvmf/common.sh@158 -- # true 00:50:36.352 13:12:55 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:50:36.352 13:12:55 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:50:36.352 13:12:55 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:50:36.352 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:50:36.352 13:12:55 -- nvmf/common.sh@161 -- # true 00:50:36.352 13:12:55 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:50:36.352 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or 
directory 00:50:36.352 13:12:55 -- nvmf/common.sh@162 -- # true 00:50:36.352 13:12:55 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:50:36.352 13:12:55 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:50:36.352 13:12:55 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:50:36.352 13:12:55 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:50:36.352 13:12:55 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:50:36.352 13:12:55 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:50:36.611 13:12:55 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:50:36.611 13:12:55 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:50:36.611 13:12:55 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:50:36.611 13:12:55 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:50:36.611 13:12:55 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:50:36.611 13:12:55 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:50:36.611 13:12:55 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:50:36.611 13:12:55 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:50:36.611 13:12:55 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:50:36.611 13:12:55 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:50:36.611 13:12:55 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:50:36.611 13:12:55 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:50:36.611 13:12:55 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:50:36.611 13:12:55 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:50:36.611 13:12:55 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:50:36.611 13:12:55 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:50:36.611 13:12:55 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:50:36.611 13:12:55 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:50:36.611 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:50:36.611 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.052 ms 00:50:36.611 00:50:36.611 --- 10.0.0.2 ping statistics --- 00:50:36.611 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:50:36.611 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:50:36.611 13:12:55 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:50:36.611 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:50:36.611 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:50:36.611 00:50:36.611 --- 10.0.0.3 ping statistics --- 00:50:36.611 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:50:36.611 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:50:36.611 13:12:55 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:50:36.611 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:50:36.611 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:50:36.611 00:50:36.611 --- 10.0.0.1 ping statistics --- 00:50:36.611 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:50:36.611 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:50:36.611 13:12:55 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:50:36.611 13:12:55 -- nvmf/common.sh@421 -- # return 0 00:50:36.611 13:12:55 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:50:36.611 13:12:55 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:50:36.611 13:12:55 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:50:36.611 13:12:55 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:50:36.611 13:12:55 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:50:36.611 13:12:55 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:50:36.611 13:12:55 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:50:36.611 13:12:55 -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:50:36.611 13:12:55 -- common/autotest_common.sh@712 -- # xtrace_disable 00:50:36.611 13:12:55 -- common/autotest_common.sh@10 -- # set +x 00:50:36.611 13:12:55 -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:50:36.611 13:12:55 -- common/autotest_common.sh@1509 -- # bdfs=() 00:50:36.611 13:12:55 -- common/autotest_common.sh@1509 -- # local bdfs 00:50:36.611 13:12:55 -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:50:36.611 13:12:55 -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:50:36.611 13:12:55 -- common/autotest_common.sh@1498 -- # bdfs=() 00:50:36.611 13:12:55 -- common/autotest_common.sh@1498 -- # local bdfs 00:50:36.611 13:12:55 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:50:36.611 13:12:55 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:50:36.611 13:12:55 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:50:36.611 13:12:55 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:50:36.611 13:12:55 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:50:36.611 13:12:55 -- common/autotest_common.sh@1512 -- # echo 0000:00:06.0 00:50:36.611 13:12:55 -- target/identify_passthru.sh@16 -- # bdf=0000:00:06.0 00:50:36.611 13:12:55 -- target/identify_passthru.sh@17 -- # '[' -z 0000:00:06.0 ']' 00:50:36.611 13:12:55 -- target/identify_passthru.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' -i 0 00:50:36.611 13:12:55 -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:50:36.611 13:12:55 -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:50:36.869 13:12:56 -- target/identify_passthru.sh@23 -- # nvme_serial_number=12340 00:50:36.869 13:12:56 -- target/identify_passthru.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' -i 0 00:50:36.869 13:12:56 -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:50:36.869 13:12:56 -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:50:37.128 13:12:56 -- target/identify_passthru.sh@24 -- # nvme_model_number=QEMU 00:50:37.128 13:12:56 -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:50:37.128 13:12:56 -- common/autotest_common.sh@718 -- # xtrace_disable 00:50:37.128 13:12:56 -- common/autotest_common.sh@10 -- # set +x 00:50:37.128 13:12:56 -- target/identify_passthru.sh@28 -- # timing_enter 
start_nvmf_tgt 00:50:37.128 13:12:56 -- common/autotest_common.sh@712 -- # xtrace_disable 00:50:37.128 13:12:56 -- common/autotest_common.sh@10 -- # set +x 00:50:37.128 13:12:56 -- target/identify_passthru.sh@31 -- # nvmfpid=100885 00:50:37.128 13:12:56 -- target/identify_passthru.sh@30 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:50:37.128 13:12:56 -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:50:37.128 13:12:56 -- target/identify_passthru.sh@35 -- # waitforlisten 100885 00:50:37.128 13:12:56 -- common/autotest_common.sh@819 -- # '[' -z 100885 ']' 00:50:37.128 13:12:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:50:37.128 13:12:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:50:37.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:50:37.128 13:12:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:50:37.128 13:12:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:50:37.128 13:12:56 -- common/autotest_common.sh@10 -- # set +x 00:50:37.128 [2024-07-22 13:12:56.415633] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:50:37.128 [2024-07-22 13:12:56.415725] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:50:37.386 [2024-07-22 13:12:56.549749] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:50:37.386 [2024-07-22 13:12:56.613244] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:50:37.386 [2024-07-22 13:12:56.613390] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:50:37.386 [2024-07-22 13:12:56.613402] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:50:37.386 [2024-07-22 13:12:56.613411] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
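The nvmf_veth_init sequence traced above reduces to the following standalone sketch (interface names, addresses, and the 4420 port are taken from this run; this is a simplified reconstruction of the nvmf/common.sh helper, not the helper itself, and it needs root):

  # Build the veth/bridge topology used by the TCP tests: the initiator stays in the
  # default namespace on 10.0.0.1, the target runs inside nvmf_tgt_ns_spdk on 10.0.0.2/.3.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  # Bridge the host-side peer interfaces so the initiator and both target ports see each other.
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  # Allow NVMe/TCP traffic on port 4420 in, and let frames hairpin across the bridge.
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  # Sanity checks, matching the pings in the log.
  ping -c 1 10.0.0.2
  ping -c 1 10.0.0.3
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

The earlier "Cannot find device" and "Cannot open network namespace" messages are expected: the helper first tears down any leftover topology from a previous run, and on a clean host those deletions fail and are ignored (each failing command is followed by "# true" in the trace).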
00:50:37.386 [2024-07-22 13:12:56.613563] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:50:37.386 [2024-07-22 13:12:56.613729] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:50:37.386 [2024-07-22 13:12:56.613870] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:50:37.386 [2024-07-22 13:12:56.613873] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:50:37.951 13:12:57 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:50:37.951 13:12:57 -- common/autotest_common.sh@852 -- # return 0 00:50:37.951 13:12:57 -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:50:37.951 13:12:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:50:37.951 13:12:57 -- common/autotest_common.sh@10 -- # set +x 00:50:37.951 13:12:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:50:37.951 13:12:57 -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:50:37.951 13:12:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:50:37.951 13:12:57 -- common/autotest_common.sh@10 -- # set +x 00:50:38.209 [2024-07-22 13:12:57.425470] nvmf_tgt.c: 423:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:50:38.209 13:12:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:50:38.209 13:12:57 -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:50:38.209 13:12:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:50:38.209 13:12:57 -- common/autotest_common.sh@10 -- # set +x 00:50:38.209 [2024-07-22 13:12:57.439536] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:50:38.209 13:12:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:50:38.209 13:12:57 -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:50:38.209 13:12:57 -- common/autotest_common.sh@718 -- # xtrace_disable 00:50:38.209 13:12:57 -- common/autotest_common.sh@10 -- # set +x 00:50:38.209 13:12:57 -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 00:50:38.209 13:12:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:50:38.209 13:12:57 -- common/autotest_common.sh@10 -- # set +x 00:50:38.209 Nvme0n1 00:50:38.209 13:12:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:50:38.209 13:12:57 -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:50:38.209 13:12:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:50:38.209 13:12:57 -- common/autotest_common.sh@10 -- # set +x 00:50:38.209 13:12:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:50:38.209 13:12:57 -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:50:38.209 13:12:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:50:38.209 13:12:57 -- common/autotest_common.sh@10 -- # set +x 00:50:38.209 13:12:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:50:38.209 13:12:57 -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:50:38.209 13:12:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:50:38.209 13:12:57 -- common/autotest_common.sh@10 -- # set +x 00:50:38.210 [2024-07-22 13:12:57.575898] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:50:38.210 13:12:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 
]] 00:50:38.210 13:12:57 -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:50:38.210 13:12:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:50:38.210 13:12:57 -- common/autotest_common.sh@10 -- # set +x 00:50:38.210 [2024-07-22 13:12:57.583698] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:50:38.210 [ 00:50:38.210 { 00:50:38.210 "allow_any_host": true, 00:50:38.210 "hosts": [], 00:50:38.210 "listen_addresses": [], 00:50:38.210 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:50:38.210 "subtype": "Discovery" 00:50:38.210 }, 00:50:38.210 { 00:50:38.210 "allow_any_host": true, 00:50:38.210 "hosts": [], 00:50:38.210 "listen_addresses": [ 00:50:38.210 { 00:50:38.210 "adrfam": "IPv4", 00:50:38.210 "traddr": "10.0.0.2", 00:50:38.210 "transport": "TCP", 00:50:38.210 "trsvcid": "4420", 00:50:38.210 "trtype": "TCP" 00:50:38.210 } 00:50:38.210 ], 00:50:38.210 "max_cntlid": 65519, 00:50:38.210 "max_namespaces": 1, 00:50:38.210 "min_cntlid": 1, 00:50:38.210 "model_number": "SPDK bdev Controller", 00:50:38.210 "namespaces": [ 00:50:38.210 { 00:50:38.210 "bdev_name": "Nvme0n1", 00:50:38.210 "name": "Nvme0n1", 00:50:38.210 "nguid": "9AE40335F6CC45199D3A99C3EBB3EF78", 00:50:38.210 "nsid": 1, 00:50:38.210 "uuid": "9ae40335-f6cc-4519-9d3a-99c3ebb3ef78" 00:50:38.210 } 00:50:38.210 ], 00:50:38.210 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:50:38.210 "serial_number": "SPDK00000000000001", 00:50:38.210 "subtype": "NVMe" 00:50:38.210 } 00:50:38.210 ] 00:50:38.210 13:12:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:50:38.210 13:12:57 -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:50:38.210 13:12:57 -- target/identify_passthru.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:50:38.210 13:12:57 -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:50:38.468 13:12:57 -- target/identify_passthru.sh@54 -- # nvmf_serial_number=12340 00:50:38.468 13:12:57 -- target/identify_passthru.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:50:38.468 13:12:57 -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:50:38.468 13:12:57 -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:50:38.726 13:12:58 -- target/identify_passthru.sh@61 -- # nvmf_model_number=QEMU 00:50:38.726 13:12:58 -- target/identify_passthru.sh@63 -- # '[' 12340 '!=' 12340 ']' 00:50:38.726 13:12:58 -- target/identify_passthru.sh@68 -- # '[' QEMU '!=' QEMU ']' 00:50:38.726 13:12:58 -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:50:38.726 13:12:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:50:38.726 13:12:58 -- common/autotest_common.sh@10 -- # set +x 00:50:38.726 13:12:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:50:38.726 13:12:58 -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:50:38.726 13:12:58 -- target/identify_passthru.sh@77 -- # nvmftestfini 00:50:38.726 13:12:58 -- nvmf/common.sh@476 -- # nvmfcleanup 00:50:38.726 13:12:58 -- nvmf/common.sh@116 -- # sync 00:50:38.726 13:12:58 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:50:38.726 13:12:58 -- nvmf/common.sh@119 -- # set +e 00:50:38.726 13:12:58 -- nvmf/common.sh@120 -- # for i in 
{1..20} 00:50:38.726 13:12:58 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:50:38.726 rmmod nvme_tcp 00:50:38.726 rmmod nvme_fabrics 00:50:38.726 rmmod nvme_keyring 00:50:38.985 13:12:58 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:50:38.985 13:12:58 -- nvmf/common.sh@123 -- # set -e 00:50:38.985 13:12:58 -- nvmf/common.sh@124 -- # return 0 00:50:38.985 13:12:58 -- nvmf/common.sh@477 -- # '[' -n 100885 ']' 00:50:38.985 13:12:58 -- nvmf/common.sh@478 -- # killprocess 100885 00:50:38.985 13:12:58 -- common/autotest_common.sh@926 -- # '[' -z 100885 ']' 00:50:38.985 13:12:58 -- common/autotest_common.sh@930 -- # kill -0 100885 00:50:38.985 13:12:58 -- common/autotest_common.sh@931 -- # uname 00:50:38.985 13:12:58 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:50:38.985 13:12:58 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 100885 00:50:38.985 13:12:58 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:50:38.985 13:12:58 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:50:38.985 killing process with pid 100885 00:50:38.985 13:12:58 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 100885' 00:50:38.985 13:12:58 -- common/autotest_common.sh@945 -- # kill 100885 00:50:38.985 [2024-07-22 13:12:58.179509] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:50:38.985 13:12:58 -- common/autotest_common.sh@950 -- # wait 100885 00:50:38.985 13:12:58 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:50:38.985 13:12:58 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:50:38.985 13:12:58 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:50:38.985 13:12:58 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:50:38.985 13:12:58 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:50:38.985 13:12:58 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:50:38.985 13:12:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:50:38.985 13:12:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:50:38.985 13:12:58 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:50:39.299 00:50:39.299 real 0m2.912s 00:50:39.299 user 0m7.344s 00:50:39.299 sys 0m0.761s 00:50:39.299 13:12:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:50:39.299 13:12:58 -- common/autotest_common.sh@10 -- # set +x 00:50:39.299 ************************************ 00:50:39.299 END TEST nvmf_identify_passthru 00:50:39.299 ************************************ 00:50:39.300 13:12:58 -- spdk/autotest.sh@300 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:50:39.300 13:12:58 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:50:39.300 13:12:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:50:39.300 13:12:58 -- common/autotest_common.sh@10 -- # set +x 00:50:39.300 ************************************ 00:50:39.300 START TEST nvmf_dif 00:50:39.300 ************************************ 00:50:39.300 13:12:58 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:50:39.300 * Looking for test storage... 
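Before following the dif suite, it is worth summarizing what the identify_passthru run above exercised on the target side. Stripped of the xtrace noise, the sequence was roughly the following (rpc_cmd is the autotest helper that drives scripts/rpc.py against the running target; the PCIe address 0000:00:06.0 and the QEMU serial/model values are specific to this VM):

  # The target was started with --wait-for-rpc, so passthru identify is configured
  # before the framework initializes.
  rpc_cmd nvmf_set_config --passthru-identify-ctrlr
  rpc_cmd framework_start_init
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192
  rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # The check then compares identify data read locally over PCIe and remotely over NVMe/TCP;
  # both reported serial 12340 and model QEMU above, so the passthru path works.
  spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' -i 0 | grep 'Serial Number:'
  spdk_nvme_identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' | grep 'Serial Number:'

(spdk_nvme_identify is the binary under build/bin in the repo checkout used here.)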
00:50:39.300 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:50:39.300 13:12:58 -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:50:39.300 13:12:58 -- nvmf/common.sh@7 -- # uname -s 00:50:39.300 13:12:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:50:39.300 13:12:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:50:39.300 13:12:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:50:39.300 13:12:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:50:39.300 13:12:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:50:39.300 13:12:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:50:39.300 13:12:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:50:39.300 13:12:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:50:39.300 13:12:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:50:39.300 13:12:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:50:39.300 13:12:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:864ae4a3-cd96-439b-8ef2-6dab4d992115 00:50:39.300 13:12:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=864ae4a3-cd96-439b-8ef2-6dab4d992115 00:50:39.300 13:12:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:50:39.300 13:12:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:50:39.300 13:12:58 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:50:39.300 13:12:58 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:50:39.300 13:12:58 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:50:39.300 13:12:58 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:50:39.300 13:12:58 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:50:39.300 13:12:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:50:39.300 13:12:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:50:39.300 13:12:58 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:50:39.300 13:12:58 -- paths/export.sh@5 -- # export PATH 00:50:39.300 13:12:58 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:50:39.300 13:12:58 -- nvmf/common.sh@46 -- # : 0 00:50:39.300 13:12:58 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:50:39.300 13:12:58 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:50:39.300 13:12:58 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:50:39.300 13:12:58 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:50:39.300 13:12:58 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:50:39.300 13:12:58 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:50:39.300 13:12:58 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:50:39.300 13:12:58 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:50:39.300 13:12:58 -- target/dif.sh@15 -- # NULL_META=16 00:50:39.300 13:12:58 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:50:39.300 13:12:58 -- target/dif.sh@15 -- # NULL_SIZE=64 00:50:39.300 13:12:58 -- target/dif.sh@15 -- # NULL_DIF=1 00:50:39.300 13:12:58 -- target/dif.sh@135 -- # nvmftestinit 00:50:39.300 13:12:58 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:50:39.300 13:12:58 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:50:39.300 13:12:58 -- nvmf/common.sh@436 -- # prepare_net_devs 00:50:39.300 13:12:58 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:50:39.300 13:12:58 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:50:39.300 13:12:58 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:50:39.300 13:12:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:50:39.300 13:12:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:50:39.300 13:12:58 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:50:39.300 13:12:58 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:50:39.300 13:12:58 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:50:39.300 13:12:58 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:50:39.300 13:12:58 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:50:39.300 13:12:58 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:50:39.300 13:12:58 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:50:39.300 13:12:58 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:50:39.300 13:12:58 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:50:39.300 13:12:58 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:50:39.300 13:12:58 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:50:39.300 13:12:58 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:50:39.300 13:12:58 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:50:39.300 13:12:58 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:50:39.300 13:12:58 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:50:39.300 13:12:58 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:50:39.300 13:12:58 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:50:39.300 13:12:58 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:50:39.300 13:12:58 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:50:39.300 13:12:58 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:50:39.300 Cannot find device "nvmf_tgt_br" 
00:50:39.300 13:12:58 -- nvmf/common.sh@154 -- # true 00:50:39.300 13:12:58 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:50:39.300 Cannot find device "nvmf_tgt_br2" 00:50:39.300 13:12:58 -- nvmf/common.sh@155 -- # true 00:50:39.300 13:12:58 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:50:39.300 13:12:58 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:50:39.300 Cannot find device "nvmf_tgt_br" 00:50:39.300 13:12:58 -- nvmf/common.sh@157 -- # true 00:50:39.300 13:12:58 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:50:39.300 Cannot find device "nvmf_tgt_br2" 00:50:39.300 13:12:58 -- nvmf/common.sh@158 -- # true 00:50:39.300 13:12:58 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:50:39.300 13:12:58 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:50:39.300 13:12:58 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:50:39.300 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:50:39.300 13:12:58 -- nvmf/common.sh@161 -- # true 00:50:39.300 13:12:58 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:50:39.300 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:50:39.300 13:12:58 -- nvmf/common.sh@162 -- # true 00:50:39.300 13:12:58 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:50:39.300 13:12:58 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:50:39.300 13:12:58 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:50:39.571 13:12:58 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:50:39.571 13:12:58 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:50:39.571 13:12:58 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:50:39.572 13:12:58 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:50:39.572 13:12:58 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:50:39.572 13:12:58 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:50:39.572 13:12:58 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:50:39.572 13:12:58 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:50:39.572 13:12:58 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:50:39.572 13:12:58 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:50:39.572 13:12:58 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:50:39.572 13:12:58 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:50:39.572 13:12:58 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:50:39.572 13:12:58 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:50:39.572 13:12:58 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:50:39.572 13:12:58 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:50:39.572 13:12:58 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:50:39.572 13:12:58 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:50:39.572 13:12:58 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:50:39.572 13:12:58 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:50:39.572 13:12:58 -- 
nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:50:39.572 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:50:39.572 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:50:39.572 00:50:39.572 --- 10.0.0.2 ping statistics --- 00:50:39.572 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:50:39.572 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:50:39.572 13:12:58 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:50:39.572 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:50:39.572 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:50:39.572 00:50:39.572 --- 10.0.0.3 ping statistics --- 00:50:39.572 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:50:39.572 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:50:39.572 13:12:58 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:50:39.572 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:50:39.572 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:50:39.572 00:50:39.572 --- 10.0.0.1 ping statistics --- 00:50:39.572 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:50:39.572 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:50:39.572 13:12:58 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:50:39.572 13:12:58 -- nvmf/common.sh@421 -- # return 0 00:50:39.572 13:12:58 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:50:39.572 13:12:58 -- nvmf/common.sh@439 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:50:39.830 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:50:39.830 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:50:39.830 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:50:39.830 13:12:59 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:50:39.830 13:12:59 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:50:39.830 13:12:59 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:50:39.830 13:12:59 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:50:39.830 13:12:59 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:50:39.830 13:12:59 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:50:40.089 13:12:59 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:50:40.089 13:12:59 -- target/dif.sh@137 -- # nvmfappstart 00:50:40.089 13:12:59 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:50:40.089 13:12:59 -- common/autotest_common.sh@712 -- # xtrace_disable 00:50:40.089 13:12:59 -- common/autotest_common.sh@10 -- # set +x 00:50:40.089 13:12:59 -- nvmf/common.sh@469 -- # nvmfpid=101228 00:50:40.089 13:12:59 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:50:40.089 13:12:59 -- nvmf/common.sh@470 -- # waitforlisten 101228 00:50:40.089 13:12:59 -- common/autotest_common.sh@819 -- # '[' -z 101228 ']' 00:50:40.089 13:12:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:50:40.089 13:12:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:50:40.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:50:40.089 13:12:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
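The dif suite reuses the same veth topology but starts its target with DIF insert/strip enabled on the TCP transport; the intent is that initiators see ordinary 512-byte blocks while the backing null bdevs carry 16 bytes of per-block metadata, with the target inserting protection information on writes and stripping it on reads. Condensed, the launch traced above amounts to (paths, the 0xFFFF trace mask, and the pid are from this run):

  # Start the target inside the test namespace, then enable DIF insert/strip on the transport.
  NVMF_TRANSPORT_OPTS='-t tcp -o --dif-insert-or-strip'
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF &
  nvmfpid=$!                     # 101228 in this run
  waitforlisten "$nvmfpid"       # autotest helper: blocks until /var/tmp/spdk.sock answers
  rpc_cmd nvmf_create_transport $NVMF_TRANSPORT_OPTS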
00:50:40.089 13:12:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:50:40.089 13:12:59 -- common/autotest_common.sh@10 -- # set +x 00:50:40.089 [2024-07-22 13:12:59.318015] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:50:40.089 [2024-07-22 13:12:59.318104] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:50:40.089 [2024-07-22 13:12:59.459056] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:50:40.347 [2024-07-22 13:12:59.530148] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:50:40.347 [2024-07-22 13:12:59.530311] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:50:40.347 [2024-07-22 13:12:59.530327] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:50:40.347 [2024-07-22 13:12:59.530338] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:50:40.347 [2024-07-22 13:12:59.530373] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:50:40.913 13:13:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:50:40.913 13:13:00 -- common/autotest_common.sh@852 -- # return 0 00:50:40.913 13:13:00 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:50:40.913 13:13:00 -- common/autotest_common.sh@718 -- # xtrace_disable 00:50:40.913 13:13:00 -- common/autotest_common.sh@10 -- # set +x 00:50:41.172 13:13:00 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:50:41.172 13:13:00 -- target/dif.sh@139 -- # create_transport 00:50:41.172 13:13:00 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:50:41.172 13:13:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:50:41.172 13:13:00 -- common/autotest_common.sh@10 -- # set +x 00:50:41.172 [2024-07-22 13:13:00.358298] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:50:41.172 13:13:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:50:41.172 13:13:00 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:50:41.172 13:13:00 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:50:41.172 13:13:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:50:41.172 13:13:00 -- common/autotest_common.sh@10 -- # set +x 00:50:41.172 ************************************ 00:50:41.172 START TEST fio_dif_1_default 00:50:41.172 ************************************ 00:50:41.172 13:13:00 -- common/autotest_common.sh@1104 -- # fio_dif_1 00:50:41.172 13:13:00 -- target/dif.sh@86 -- # create_subsystems 0 00:50:41.172 13:13:00 -- target/dif.sh@28 -- # local sub 00:50:41.172 13:13:00 -- target/dif.sh@30 -- # for sub in "$@" 00:50:41.172 13:13:00 -- target/dif.sh@31 -- # create_subsystem 0 00:50:41.172 13:13:00 -- target/dif.sh@18 -- # local sub_id=0 00:50:41.172 13:13:00 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:50:41.172 13:13:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:50:41.172 13:13:00 -- common/autotest_common.sh@10 -- # set +x 00:50:41.172 bdev_null0 00:50:41.172 13:13:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:50:41.172 13:13:00 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:50:41.172 13:13:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:50:41.172 13:13:00 -- common/autotest_common.sh@10 -- # set +x 00:50:41.172 13:13:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:50:41.172 13:13:00 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:50:41.172 13:13:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:50:41.172 13:13:00 -- common/autotest_common.sh@10 -- # set +x 00:50:41.172 13:13:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:50:41.172 13:13:00 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:50:41.172 13:13:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:50:41.172 13:13:00 -- common/autotest_common.sh@10 -- # set +x 00:50:41.172 [2024-07-22 13:13:00.406414] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:50:41.172 13:13:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:50:41.172 13:13:00 -- target/dif.sh@87 -- # fio /dev/fd/62 00:50:41.172 13:13:00 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:50:41.172 13:13:00 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:50:41.172 13:13:00 -- nvmf/common.sh@520 -- # config=() 00:50:41.172 13:13:00 -- nvmf/common.sh@520 -- # local subsystem config 00:50:41.172 13:13:00 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:50:41.172 13:13:00 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:50:41.172 { 00:50:41.172 "params": { 00:50:41.172 "name": "Nvme$subsystem", 00:50:41.172 "trtype": "$TEST_TRANSPORT", 00:50:41.172 "traddr": "$NVMF_FIRST_TARGET_IP", 00:50:41.172 "adrfam": "ipv4", 00:50:41.172 "trsvcid": "$NVMF_PORT", 00:50:41.172 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:50:41.172 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:50:41.172 "hdgst": ${hdgst:-false}, 00:50:41.172 "ddgst": ${ddgst:-false} 00:50:41.172 }, 00:50:41.172 "method": "bdev_nvme_attach_controller" 00:50:41.172 } 00:50:41.172 EOF 00:50:41.172 )") 00:50:41.172 13:13:00 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:50:41.172 13:13:00 -- target/dif.sh@82 -- # gen_fio_conf 00:50:41.172 13:13:00 -- target/dif.sh@54 -- # local file 00:50:41.172 13:13:00 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:50:41.172 13:13:00 -- target/dif.sh@56 -- # cat 00:50:41.172 13:13:00 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:50:41.172 13:13:00 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:50:41.172 13:13:00 -- common/autotest_common.sh@1318 -- # local sanitizers 00:50:41.172 13:13:00 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:50:41.172 13:13:00 -- common/autotest_common.sh@1320 -- # shift 00:50:41.172 13:13:00 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:50:41.172 13:13:00 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:50:41.172 13:13:00 -- nvmf/common.sh@542 -- # cat 00:50:41.172 13:13:00 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:50:41.172 13:13:00 -- common/autotest_common.sh@1324 -- # grep libasan 00:50:41.172 13:13:00 -- target/dif.sh@72 -- # (( file = 1 )) 00:50:41.172 
13:13:00 -- target/dif.sh@72 -- # (( file <= files )) 00:50:41.172 13:13:00 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:50:41.172 13:13:00 -- nvmf/common.sh@544 -- # jq . 00:50:41.172 13:13:00 -- nvmf/common.sh@545 -- # IFS=, 00:50:41.172 13:13:00 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:50:41.172 "params": { 00:50:41.172 "name": "Nvme0", 00:50:41.172 "trtype": "tcp", 00:50:41.172 "traddr": "10.0.0.2", 00:50:41.172 "adrfam": "ipv4", 00:50:41.172 "trsvcid": "4420", 00:50:41.172 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:50:41.172 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:50:41.172 "hdgst": false, 00:50:41.172 "ddgst": false 00:50:41.172 }, 00:50:41.172 "method": "bdev_nvme_attach_controller" 00:50:41.172 }' 00:50:41.172 13:13:00 -- common/autotest_common.sh@1324 -- # asan_lib= 00:50:41.172 13:13:00 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:50:41.172 13:13:00 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:50:41.172 13:13:00 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:50:41.172 13:13:00 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:50:41.172 13:13:00 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:50:41.172 13:13:00 -- common/autotest_common.sh@1324 -- # asan_lib= 00:50:41.172 13:13:00 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:50:41.172 13:13:00 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:50:41.172 13:13:00 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:50:41.431 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:50:41.431 fio-3.35 00:50:41.431 Starting 1 thread 00:50:41.690 [2024-07-22 13:13:01.029740] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
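fio itself runs back in the initiator context with SPDK's external bdev ioengine. The job file comes from gen_fio_conf and is not echoed in the log, but the observed parameters for this default case were rw=randread, bs=4096, iodepth=4 against a single file, and the invocation has this shape (file-descriptor plumbing as used by fio_bdev above):

  # fio consumes two generated inputs: an SPDK JSON config on fd 62 (the
  # bdev_nvme_attach_controller block printed above, pointing Nvme0 at
  # 10.0.0.2:4420 / nqn.2016-06.io.spdk:cnode0) and the fio job file on fd 61.
  LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
      /usr/src/fio/fio --ioengine=spdk_bdev \
      --spdk_json_conf /dev/fd/62 /dev/fd/61

The "RPC Unix domain socket path /var/tmp/spdk.sock in use" errors around the fio start are the fio-side SPDK app failing to claim the default RPC socket already held by the target; they are benign here, as the job completes normally.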
00:50:41.690 [2024-07-22 13:13:01.029833] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:50:53.890 00:50:53.890 filename0: (groupid=0, jobs=1): err= 0: pid=101311: Mon Jul 22 13:13:11 2024 00:50:53.890 read: IOPS=1259, BW=5038KiB/s (5159kB/s)(49.2MiB/10003msec) 00:50:53.890 slat (nsec): min=5796, max=58598, avg=8075.58, stdev=4023.76 00:50:53.890 clat (usec): min=340, max=42564, avg=3151.47, stdev=10124.48 00:50:53.890 lat (usec): min=346, max=42573, avg=3159.54, stdev=10124.56 00:50:53.890 clat percentiles (usec): 00:50:53.890 | 1.00th=[ 347], 5.00th=[ 359], 10.00th=[ 371], 20.00th=[ 388], 00:50:53.890 | 30.00th=[ 400], 40.00th=[ 416], 50.00th=[ 433], 60.00th=[ 453], 00:50:53.890 | 70.00th=[ 478], 80.00th=[ 506], 90.00th=[ 553], 95.00th=[40633], 00:50:53.890 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[42730], 00:50:53.890 | 99.99th=[42730] 00:50:53.890 bw ( KiB/s): min= 2272, max= 7520, per=100.00%, avg=5156.68, stdev=1405.11, samples=19 00:50:53.890 iops : min= 568, max= 1880, avg=1289.16, stdev=351.30, samples=19 00:50:53.890 lat (usec) : 500=78.26%, 750=14.98%, 1000=0.03% 00:50:53.890 lat (msec) : 10=0.03%, 50=6.70% 00:50:53.890 cpu : usr=91.48%, sys=7.78%, ctx=25, majf=0, minf=0 00:50:53.890 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:50:53.890 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:50:53.890 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:50:53.890 issued rwts: total=12600,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:50:53.890 latency : target=0, window=0, percentile=100.00%, depth=4 00:50:53.890 00:50:53.891 Run status group 0 (all jobs): 00:50:53.891 READ: bw=5038KiB/s (5159kB/s), 5038KiB/s-5038KiB/s (5159kB/s-5159kB/s), io=49.2MiB (51.6MB), run=10003-10003msec 00:50:53.891 13:13:11 -- target/dif.sh@88 -- # destroy_subsystems 0 00:50:53.891 13:13:11 -- target/dif.sh@43 -- # local sub 00:50:53.891 13:13:11 -- target/dif.sh@45 -- # for sub in "$@" 00:50:53.891 13:13:11 -- target/dif.sh@46 -- # destroy_subsystem 0 00:50:53.891 13:13:11 -- target/dif.sh@36 -- # local sub_id=0 00:50:53.891 13:13:11 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:50:53.891 13:13:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:50:53.891 13:13:11 -- common/autotest_common.sh@10 -- # set +x 00:50:53.891 13:13:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:50:53.891 13:13:11 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:50:53.891 13:13:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:50:53.891 13:13:11 -- common/autotest_common.sh@10 -- # set +x 00:50:53.891 13:13:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:50:53.891 00:50:53.891 real 0m10.994s 00:50:53.891 user 0m9.784s 00:50:53.891 sys 0m1.046s 00:50:53.891 13:13:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:50:53.891 13:13:11 -- common/autotest_common.sh@10 -- # set +x 00:50:53.891 ************************************ 00:50:53.891 END TEST fio_dif_1_default 00:50:53.891 ************************************ 00:50:53.891 13:13:11 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:50:53.891 13:13:11 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:50:53.891 13:13:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:50:53.891 13:13:11 -- common/autotest_common.sh@10 -- # set +x 00:50:53.891 ************************************ 
00:50:53.891 START TEST fio_dif_1_multi_subsystems 00:50:53.891 ************************************ 00:50:53.891 13:13:11 -- common/autotest_common.sh@1104 -- # fio_dif_1_multi_subsystems 00:50:53.891 13:13:11 -- target/dif.sh@92 -- # local files=1 00:50:53.891 13:13:11 -- target/dif.sh@94 -- # create_subsystems 0 1 00:50:53.891 13:13:11 -- target/dif.sh@28 -- # local sub 00:50:53.891 13:13:11 -- target/dif.sh@30 -- # for sub in "$@" 00:50:53.891 13:13:11 -- target/dif.sh@31 -- # create_subsystem 0 00:50:53.891 13:13:11 -- target/dif.sh@18 -- # local sub_id=0 00:50:53.891 13:13:11 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:50:53.891 13:13:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:50:53.891 13:13:11 -- common/autotest_common.sh@10 -- # set +x 00:50:53.891 bdev_null0 00:50:53.891 13:13:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:50:53.891 13:13:11 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:50:53.891 13:13:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:50:53.891 13:13:11 -- common/autotest_common.sh@10 -- # set +x 00:50:53.891 13:13:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:50:53.891 13:13:11 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:50:53.891 13:13:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:50:53.891 13:13:11 -- common/autotest_common.sh@10 -- # set +x 00:50:53.891 13:13:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:50:53.891 13:13:11 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:50:53.891 13:13:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:50:53.891 13:13:11 -- common/autotest_common.sh@10 -- # set +x 00:50:53.891 [2024-07-22 13:13:11.457111] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:50:53.891 13:13:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:50:53.891 13:13:11 -- target/dif.sh@30 -- # for sub in "$@" 00:50:53.891 13:13:11 -- target/dif.sh@31 -- # create_subsystem 1 00:50:53.891 13:13:11 -- target/dif.sh@18 -- # local sub_id=1 00:50:53.891 13:13:11 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:50:53.891 13:13:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:50:53.891 13:13:11 -- common/autotest_common.sh@10 -- # set +x 00:50:53.891 bdev_null1 00:50:53.891 13:13:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:50:53.891 13:13:11 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:50:53.891 13:13:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:50:53.891 13:13:11 -- common/autotest_common.sh@10 -- # set +x 00:50:53.891 13:13:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:50:53.891 13:13:11 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:50:53.891 13:13:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:50:53.891 13:13:11 -- common/autotest_common.sh@10 -- # set +x 00:50:53.891 13:13:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:50:53.891 13:13:11 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:50:53.891 13:13:11 -- common/autotest_common.sh@551 -- # xtrace_disable 
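The per-subsystem setup just traced is the same recipe applied twice by create_subsystems 0 1. Reconstructed (serial numbers, the shared 10.0.0.2:4420 listener, and the null bdev geometry 64 / 512 / --md-size 16 all as used in this run):

  # One null bdev plus one NVMe-oF subsystem per fio file; both listen on the same TCP portal.
  for sub in 0 1; do
      rpc_cmd bdev_null_create "bdev_null$sub" 64 512 --md-size 16 --dif-type 1
      rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$sub" \
          --serial-number "53313233-$sub" --allow-any-host
      rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$sub" "bdev_null$sub"
      rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$sub" \
          -t tcp -a 10.0.0.2 -s 4420
  done

gen_nvmf_target_json 0 1 then emits the two-controller config that follows (Nvme0 mapped to cnode0, Nvme1 to cnode1), and fio runs one thread per file against them.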
00:50:53.891 13:13:11 -- common/autotest_common.sh@10 -- # set +x 00:50:53.891 13:13:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:50:53.891 13:13:11 -- target/dif.sh@95 -- # fio /dev/fd/62 00:50:53.891 13:13:11 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:50:53.891 13:13:11 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:50:53.891 13:13:11 -- nvmf/common.sh@520 -- # config=() 00:50:53.891 13:13:11 -- nvmf/common.sh@520 -- # local subsystem config 00:50:53.891 13:13:11 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:50:53.891 13:13:11 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:50:53.891 { 00:50:53.891 "params": { 00:50:53.891 "name": "Nvme$subsystem", 00:50:53.891 "trtype": "$TEST_TRANSPORT", 00:50:53.891 "traddr": "$NVMF_FIRST_TARGET_IP", 00:50:53.891 "adrfam": "ipv4", 00:50:53.891 "trsvcid": "$NVMF_PORT", 00:50:53.891 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:50:53.891 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:50:53.891 "hdgst": ${hdgst:-false}, 00:50:53.891 "ddgst": ${ddgst:-false} 00:50:53.891 }, 00:50:53.891 "method": "bdev_nvme_attach_controller" 00:50:53.891 } 00:50:53.891 EOF 00:50:53.891 )") 00:50:53.891 13:13:11 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:50:53.891 13:13:11 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:50:53.891 13:13:11 -- target/dif.sh@82 -- # gen_fio_conf 00:50:53.891 13:13:11 -- target/dif.sh@54 -- # local file 00:50:53.891 13:13:11 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:50:53.891 13:13:11 -- target/dif.sh@56 -- # cat 00:50:53.891 13:13:11 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:50:53.891 13:13:11 -- common/autotest_common.sh@1318 -- # local sanitizers 00:50:53.891 13:13:11 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:50:53.891 13:13:11 -- common/autotest_common.sh@1320 -- # shift 00:50:53.891 13:13:11 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:50:53.891 13:13:11 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:50:53.891 13:13:11 -- nvmf/common.sh@542 -- # cat 00:50:53.891 13:13:11 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:50:53.891 13:13:11 -- common/autotest_common.sh@1324 -- # grep libasan 00:50:53.891 13:13:11 -- target/dif.sh@72 -- # (( file = 1 )) 00:50:53.891 13:13:11 -- target/dif.sh@72 -- # (( file <= files )) 00:50:53.891 13:13:11 -- target/dif.sh@73 -- # cat 00:50:53.891 13:13:11 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:50:53.891 13:13:11 -- target/dif.sh@72 -- # (( file++ )) 00:50:53.891 13:13:11 -- target/dif.sh@72 -- # (( file <= files )) 00:50:53.891 13:13:11 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:50:53.891 13:13:11 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:50:53.891 { 00:50:53.891 "params": { 00:50:53.891 "name": "Nvme$subsystem", 00:50:53.891 "trtype": "$TEST_TRANSPORT", 00:50:53.891 "traddr": "$NVMF_FIRST_TARGET_IP", 00:50:53.891 "adrfam": "ipv4", 00:50:53.891 "trsvcid": "$NVMF_PORT", 00:50:53.891 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:50:53.891 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:50:53.891 "hdgst": ${hdgst:-false}, 00:50:53.891 "ddgst": ${ddgst:-false} 00:50:53.891 }, 00:50:53.891 "method": 
"bdev_nvme_attach_controller" 00:50:53.891 } 00:50:53.891 EOF 00:50:53.891 )") 00:50:53.891 13:13:11 -- nvmf/common.sh@542 -- # cat 00:50:53.891 13:13:11 -- nvmf/common.sh@544 -- # jq . 00:50:53.891 13:13:11 -- nvmf/common.sh@545 -- # IFS=, 00:50:53.891 13:13:11 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:50:53.891 "params": { 00:50:53.891 "name": "Nvme0", 00:50:53.892 "trtype": "tcp", 00:50:53.892 "traddr": "10.0.0.2", 00:50:53.892 "adrfam": "ipv4", 00:50:53.892 "trsvcid": "4420", 00:50:53.892 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:50:53.892 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:50:53.892 "hdgst": false, 00:50:53.892 "ddgst": false 00:50:53.892 }, 00:50:53.892 "method": "bdev_nvme_attach_controller" 00:50:53.892 },{ 00:50:53.892 "params": { 00:50:53.892 "name": "Nvme1", 00:50:53.892 "trtype": "tcp", 00:50:53.892 "traddr": "10.0.0.2", 00:50:53.892 "adrfam": "ipv4", 00:50:53.892 "trsvcid": "4420", 00:50:53.892 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:50:53.892 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:50:53.892 "hdgst": false, 00:50:53.892 "ddgst": false 00:50:53.892 }, 00:50:53.892 "method": "bdev_nvme_attach_controller" 00:50:53.892 }' 00:50:53.892 13:13:11 -- common/autotest_common.sh@1324 -- # asan_lib= 00:50:53.892 13:13:11 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:50:53.892 13:13:11 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:50:53.892 13:13:11 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:50:53.892 13:13:11 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:50:53.892 13:13:11 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:50:53.892 13:13:11 -- common/autotest_common.sh@1324 -- # asan_lib= 00:50:53.892 13:13:11 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:50:53.892 13:13:11 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:50:53.892 13:13:11 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:50:53.892 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:50:53.892 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:50:53.892 fio-3.35 00:50:53.892 Starting 2 threads 00:50:53.892 [2024-07-22 13:13:12.205466] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:50:53.892 [2024-07-22 13:13:12.205526] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:51:03.912 00:51:03.912 filename0: (groupid=0, jobs=1): err= 0: pid=101470: Mon Jul 22 13:13:22 2024 00:51:03.912 read: IOPS=168, BW=674KiB/s (690kB/s)(6768KiB/10037msec) 00:51:03.912 slat (nsec): min=6239, max=34398, avg=8285.24, stdev=3308.71 00:51:03.912 clat (usec): min=366, max=41940, avg=23701.92, stdev=20010.58 00:51:03.912 lat (usec): min=372, max=41951, avg=23710.21, stdev=20010.43 00:51:03.912 clat percentiles (usec): 00:51:03.912 | 1.00th=[ 375], 5.00th=[ 392], 10.00th=[ 408], 20.00th=[ 433], 00:51:03.912 | 30.00th=[ 457], 40.00th=[ 519], 50.00th=[40633], 60.00th=[41157], 00:51:03.912 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:51:03.912 | 99.00th=[41157], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:51:03.912 | 99.99th=[41681] 00:51:03.912 bw ( KiB/s): min= 448, max= 896, per=50.17%, avg=675.20, stdev=131.69, samples=20 00:51:03.912 iops : min= 112, max= 224, avg=168.80, stdev=32.92, samples=20 00:51:03.912 lat (usec) : 500=38.24%, 750=4.08% 00:51:03.912 lat (msec) : 4=0.06%, 10=0.18%, 50=57.45% 00:51:03.912 cpu : usr=95.74%, sys=3.92%, ctx=19, majf=0, minf=9 00:51:03.912 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:51:03.912 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:51:03.912 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:51:03.912 issued rwts: total=1692,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:51:03.912 latency : target=0, window=0, percentile=100.00%, depth=4 00:51:03.912 filename1: (groupid=0, jobs=1): err= 0: pid=101471: Mon Jul 22 13:13:22 2024 00:51:03.912 read: IOPS=167, BW=671KiB/s (688kB/s)(6736KiB/10032msec) 00:51:03.912 slat (nsec): min=6212, max=40208, avg=8321.12, stdev=3281.17 00:51:03.912 clat (usec): min=366, max=42559, avg=23803.09, stdev=19985.64 00:51:03.912 lat (usec): min=372, max=42569, avg=23811.41, stdev=19985.67 00:51:03.912 clat percentiles (usec): 00:51:03.912 | 1.00th=[ 379], 5.00th=[ 396], 10.00th=[ 408], 20.00th=[ 433], 00:51:03.912 | 30.00th=[ 457], 40.00th=[ 519], 50.00th=[40633], 60.00th=[40633], 00:51:03.912 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:51:03.912 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42730], 99.95th=[42730], 00:51:03.912 | 99.99th=[42730] 00:51:03.912 bw ( KiB/s): min= 480, max= 896, per=49.95%, avg=672.00, stdev=121.52, samples=20 00:51:03.912 iops : min= 120, max= 224, avg=168.00, stdev=30.38, samples=20 00:51:03.912 lat (usec) : 500=38.12%, 750=3.92% 00:51:03.912 lat (msec) : 10=0.24%, 50=57.72% 00:51:03.912 cpu : usr=95.35%, sys=4.28%, ctx=14, majf=0, minf=8 00:51:03.912 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:51:03.912 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:51:03.912 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:51:03.912 issued rwts: total=1684,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:51:03.912 latency : target=0, window=0, percentile=100.00%, depth=4 00:51:03.912 00:51:03.912 Run status group 0 (all jobs): 00:51:03.912 READ: bw=1345KiB/s (1378kB/s), 671KiB/s-674KiB/s (688kB/s-690kB/s), io=13.2MiB (13.8MB), run=10032-10037msec 00:51:03.912 13:13:22 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:51:03.912 13:13:22 -- target/dif.sh@43 -- # local sub 00:51:03.912 13:13:22 -- target/dif.sh@45 -- # for sub in "$@" 
00:51:03.912 13:13:22 -- target/dif.sh@46 -- # destroy_subsystem 0 00:51:03.912 13:13:22 -- target/dif.sh@36 -- # local sub_id=0 00:51:03.912 13:13:22 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:51:03.912 13:13:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:51:03.912 13:13:22 -- common/autotest_common.sh@10 -- # set +x 00:51:03.912 13:13:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:51:03.912 13:13:22 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:51:03.912 13:13:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:51:03.912 13:13:22 -- common/autotest_common.sh@10 -- # set +x 00:51:03.912 13:13:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:51:03.912 13:13:22 -- target/dif.sh@45 -- # for sub in "$@" 00:51:03.912 13:13:22 -- target/dif.sh@46 -- # destroy_subsystem 1 00:51:03.912 13:13:22 -- target/dif.sh@36 -- # local sub_id=1 00:51:03.912 13:13:22 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:51:03.912 13:13:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:51:03.912 13:13:22 -- common/autotest_common.sh@10 -- # set +x 00:51:03.912 13:13:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:51:03.912 13:13:22 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:51:03.912 13:13:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:51:03.912 13:13:22 -- common/autotest_common.sh@10 -- # set +x 00:51:03.912 13:13:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:51:03.912 00:51:03.912 real 0m11.161s 00:51:03.912 user 0m19.920s 00:51:03.912 sys 0m1.108s 00:51:03.912 13:13:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:51:03.912 ************************************ 00:51:03.912 END TEST fio_dif_1_multi_subsystems 00:51:03.912 ************************************ 00:51:03.912 13:13:22 -- common/autotest_common.sh@10 -- # set +x 00:51:03.912 13:13:22 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:51:03.912 13:13:22 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:51:03.912 13:13:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:51:03.912 13:13:22 -- common/autotest_common.sh@10 -- # set +x 00:51:03.912 ************************************ 00:51:03.912 START TEST fio_dif_rand_params 00:51:03.912 ************************************ 00:51:03.912 13:13:22 -- common/autotest_common.sh@1104 -- # fio_dif_rand_params 00:51:03.912 13:13:22 -- target/dif.sh@100 -- # local NULL_DIF 00:51:03.912 13:13:22 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:51:03.912 13:13:22 -- target/dif.sh@103 -- # NULL_DIF=3 00:51:03.912 13:13:22 -- target/dif.sh@103 -- # bs=128k 00:51:03.912 13:13:22 -- target/dif.sh@103 -- # numjobs=3 00:51:03.912 13:13:22 -- target/dif.sh@103 -- # iodepth=3 00:51:03.912 13:13:22 -- target/dif.sh@103 -- # runtime=5 00:51:03.912 13:13:22 -- target/dif.sh@105 -- # create_subsystems 0 00:51:03.912 13:13:22 -- target/dif.sh@28 -- # local sub 00:51:03.912 13:13:22 -- target/dif.sh@30 -- # for sub in "$@" 00:51:03.912 13:13:22 -- target/dif.sh@31 -- # create_subsystem 0 00:51:03.912 13:13:22 -- target/dif.sh@18 -- # local sub_id=0 00:51:03.912 13:13:22 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:51:03.912 13:13:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:51:03.912 13:13:22 -- common/autotest_common.sh@10 -- # set +x 00:51:03.912 bdev_null0 00:51:03.912 13:13:22 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:51:03.912 13:13:22 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:51:03.912 13:13:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:51:03.912 13:13:22 -- common/autotest_common.sh@10 -- # set +x 00:51:03.912 13:13:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:51:03.912 13:13:22 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:51:03.912 13:13:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:51:03.912 13:13:22 -- common/autotest_common.sh@10 -- # set +x 00:51:03.913 13:13:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:51:03.913 13:13:22 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:51:03.913 13:13:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:51:03.913 13:13:22 -- common/autotest_common.sh@10 -- # set +x 00:51:03.913 [2024-07-22 13:13:22.672820] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:51:03.913 13:13:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:51:03.913 13:13:22 -- target/dif.sh@106 -- # fio /dev/fd/62 00:51:03.913 13:13:22 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:51:03.913 13:13:22 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:51:03.913 13:13:22 -- nvmf/common.sh@520 -- # config=() 00:51:03.913 13:13:22 -- nvmf/common.sh@520 -- # local subsystem config 00:51:03.913 13:13:22 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:51:03.913 13:13:22 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:51:03.913 13:13:22 -- target/dif.sh@82 -- # gen_fio_conf 00:51:03.913 13:13:22 -- target/dif.sh@54 -- # local file 00:51:03.913 13:13:22 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:51:03.913 { 00:51:03.913 "params": { 00:51:03.913 "name": "Nvme$subsystem", 00:51:03.913 "trtype": "$TEST_TRANSPORT", 00:51:03.913 "traddr": "$NVMF_FIRST_TARGET_IP", 00:51:03.913 "adrfam": "ipv4", 00:51:03.913 "trsvcid": "$NVMF_PORT", 00:51:03.913 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:51:03.913 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:51:03.913 "hdgst": ${hdgst:-false}, 00:51:03.913 "ddgst": ${ddgst:-false} 00:51:03.913 }, 00:51:03.913 "method": "bdev_nvme_attach_controller" 00:51:03.913 } 00:51:03.913 EOF 00:51:03.913 )") 00:51:03.913 13:13:22 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:51:03.913 13:13:22 -- target/dif.sh@56 -- # cat 00:51:03.913 13:13:22 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:51:03.913 13:13:22 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:51:03.913 13:13:22 -- common/autotest_common.sh@1318 -- # local sanitizers 00:51:03.913 13:13:22 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:51:03.913 13:13:22 -- common/autotest_common.sh@1320 -- # shift 00:51:03.913 13:13:22 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:51:03.913 13:13:22 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:51:03.913 13:13:22 -- nvmf/common.sh@542 -- # cat 00:51:03.913 13:13:22 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:51:03.913 13:13:22 
-- target/dif.sh@72 -- # (( file = 1 )) 00:51:03.913 13:13:22 -- target/dif.sh@72 -- # (( file <= files )) 00:51:03.913 13:13:22 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:51:03.913 13:13:22 -- common/autotest_common.sh@1324 -- # grep libasan 00:51:03.913 13:13:22 -- nvmf/common.sh@544 -- # jq . 00:51:03.913 13:13:22 -- nvmf/common.sh@545 -- # IFS=, 00:51:03.913 13:13:22 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:51:03.913 "params": { 00:51:03.913 "name": "Nvme0", 00:51:03.913 "trtype": "tcp", 00:51:03.913 "traddr": "10.0.0.2", 00:51:03.913 "adrfam": "ipv4", 00:51:03.913 "trsvcid": "4420", 00:51:03.913 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:51:03.913 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:51:03.913 "hdgst": false, 00:51:03.913 "ddgst": false 00:51:03.913 }, 00:51:03.913 "method": "bdev_nvme_attach_controller" 00:51:03.913 }' 00:51:03.913 13:13:22 -- common/autotest_common.sh@1324 -- # asan_lib= 00:51:03.913 13:13:22 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:51:03.913 13:13:22 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:51:03.913 13:13:22 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:51:03.913 13:13:22 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:51:03.913 13:13:22 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:51:03.913 13:13:22 -- common/autotest_common.sh@1324 -- # asan_lib= 00:51:03.913 13:13:22 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:51:03.913 13:13:22 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:51:03.913 13:13:22 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:51:03.913 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:51:03.913 ... 00:51:03.913 fio-3.35 00:51:03.913 Starting 3 threads 00:51:03.913 [2024-07-22 13:13:23.289317] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
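The create_subsystem trace above reduces to four RPCs per index: create a DIF-capable null bdev (--md-size 16 --dif-type 3 for this test), create the NVMe-oF subsystem, attach the bdev as its namespace, and add the TCP listener that produces the "Target Listening" notice. A standalone sketch with scripts/rpc.py using the same arguments the harness passes; the nvmf_create_transport line is an assumption (the real test sets the transport up earlier, outside this excerpt), as are the rpc.py path and the default socket:

scripts/rpc.py nvmf_create_transport -t tcp    # assumed: done earlier in the real run
scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420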
00:51:03.913 [2024-07-22 13:13:23.289406] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:51:09.175 00:51:09.175 filename0: (groupid=0, jobs=1): err= 0: pid=101627: Mon Jul 22 13:13:28 2024 00:51:09.175 read: IOPS=297, BW=37.2MiB/s (39.0MB/s)(186MiB/5005msec) 00:51:09.175 slat (nsec): min=6710, max=52686, avg=11283.38, stdev=4091.80 00:51:09.175 clat (usec): min=5390, max=52490, avg=10075.11, stdev=3376.56 00:51:09.175 lat (usec): min=5400, max=52501, avg=10086.39, stdev=3376.53 00:51:09.175 clat percentiles (usec): 00:51:09.175 | 1.00th=[ 5932], 5.00th=[ 7308], 10.00th=[ 8586], 20.00th=[ 9241], 00:51:09.175 | 30.00th=[ 9503], 40.00th=[ 9765], 50.00th=[10028], 60.00th=[10159], 00:51:09.175 | 70.00th=[10421], 80.00th=[10683], 90.00th=[11076], 95.00th=[11338], 00:51:09.175 | 99.00th=[12125], 99.50th=[50594], 99.90th=[51643], 99.95th=[52691], 00:51:09.175 | 99.99th=[52691] 00:51:09.175 bw ( KiB/s): min=34560, max=41472, per=38.69%, avg=38016.00, stdev=2176.42, samples=10 00:51:09.175 iops : min= 270, max= 324, avg=297.00, stdev=17.00, samples=10 00:51:09.175 lat (msec) : 10=50.07%, 20=49.33%, 50=0.07%, 100=0.54% 00:51:09.175 cpu : usr=92.21%, sys=6.29%, ctx=10, majf=0, minf=0 00:51:09.175 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:51:09.175 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:51:09.175 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:51:09.175 issued rwts: total=1488,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:51:09.175 latency : target=0, window=0, percentile=100.00%, depth=3 00:51:09.175 filename0: (groupid=0, jobs=1): err= 0: pid=101628: Mon Jul 22 13:13:28 2024 00:51:09.175 read: IOPS=252, BW=31.6MiB/s (33.1MB/s)(158MiB/5003msec) 00:51:09.175 slat (nsec): min=6554, max=62796, avg=11183.87, stdev=5054.02 00:51:09.175 clat (usec): min=5652, max=54252, avg=11843.78, stdev=5022.00 00:51:09.175 lat (usec): min=5662, max=54265, avg=11854.97, stdev=5021.94 00:51:09.175 clat percentiles (usec): 00:51:09.175 | 1.00th=[ 6849], 5.00th=[ 9372], 10.00th=[10028], 20.00th=[10552], 00:51:09.175 | 30.00th=[10945], 40.00th=[11207], 50.00th=[11469], 60.00th=[11731], 00:51:09.175 | 70.00th=[11863], 80.00th=[12125], 90.00th=[12518], 95.00th=[12911], 00:51:09.175 | 99.00th=[51643], 99.50th=[52691], 99.90th=[54264], 99.95th=[54264], 00:51:09.175 | 99.99th=[54264] 00:51:09.175 bw ( KiB/s): min=27648, max=35328, per=32.57%, avg=32000.00, stdev=2641.89, samples=9 00:51:09.175 iops : min= 216, max= 276, avg=250.00, stdev=20.64, samples=9 00:51:09.175 lat (msec) : 10=10.28%, 20=88.30%, 100=1.42% 00:51:09.175 cpu : usr=92.64%, sys=6.00%, ctx=5, majf=0, minf=9 00:51:09.175 IO depths : 1=7.7%, 2=92.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:51:09.175 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:51:09.175 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:51:09.175 issued rwts: total=1265,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:51:09.175 latency : target=0, window=0, percentile=100.00%, depth=3 00:51:09.175 filename0: (groupid=0, jobs=1): err= 0: pid=101629: Mon Jul 22 13:13:28 2024 00:51:09.175 read: IOPS=217, BW=27.2MiB/s (28.5MB/s)(136MiB/5002msec) 00:51:09.175 slat (nsec): min=6493, max=52158, avg=9394.78, stdev=4024.68 00:51:09.175 clat (usec): min=7666, max=16423, avg=13755.20, stdev=1835.37 00:51:09.175 lat (usec): min=7673, max=16451, avg=13764.59, stdev=1835.36 00:51:09.175 clat percentiles (usec): 00:51:09.176 
| 1.00th=[ 8225], 5.00th=[ 8848], 10.00th=[10552], 20.00th=[13435], 00:51:09.176 | 30.00th=[13698], 40.00th=[13960], 50.00th=[14222], 60.00th=[14353], 00:51:09.176 | 70.00th=[14615], 80.00th=[15008], 90.00th=[15401], 95.00th=[15533], 00:51:09.176 | 99.00th=[16057], 99.50th=[16188], 99.90th=[16450], 99.95th=[16450], 00:51:09.176 | 99.99th=[16450] 00:51:09.176 bw ( KiB/s): min=25344, max=29952, per=28.40%, avg=27904.00, stdev=1629.17, samples=9 00:51:09.176 iops : min= 198, max= 234, avg=218.00, stdev=12.73, samples=9 00:51:09.176 lat (msec) : 10=9.37%, 20=90.63% 00:51:09.176 cpu : usr=93.90%, sys=4.86%, ctx=13, majf=0, minf=9 00:51:09.176 IO depths : 1=32.9%, 2=67.1%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:51:09.176 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:51:09.176 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:51:09.176 issued rwts: total=1089,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:51:09.176 latency : target=0, window=0, percentile=100.00%, depth=3 00:51:09.176 00:51:09.176 Run status group 0 (all jobs): 00:51:09.176 READ: bw=96.0MiB/s (101MB/s), 27.2MiB/s-37.2MiB/s (28.5MB/s-39.0MB/s), io=480MiB (504MB), run=5002-5005msec 00:51:09.434 13:13:28 -- target/dif.sh@107 -- # destroy_subsystems 0 00:51:09.434 13:13:28 -- target/dif.sh@43 -- # local sub 00:51:09.434 13:13:28 -- target/dif.sh@45 -- # for sub in "$@" 00:51:09.434 13:13:28 -- target/dif.sh@46 -- # destroy_subsystem 0 00:51:09.434 13:13:28 -- target/dif.sh@36 -- # local sub_id=0 00:51:09.434 13:13:28 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:51:09.434 13:13:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:51:09.434 13:13:28 -- common/autotest_common.sh@10 -- # set +x 00:51:09.434 13:13:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:51:09.434 13:13:28 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:51:09.434 13:13:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:51:09.434 13:13:28 -- common/autotest_common.sh@10 -- # set +x 00:51:09.434 13:13:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:51:09.434 13:13:28 -- target/dif.sh@109 -- # NULL_DIF=2 00:51:09.434 13:13:28 -- target/dif.sh@109 -- # bs=4k 00:51:09.434 13:13:28 -- target/dif.sh@109 -- # numjobs=8 00:51:09.434 13:13:28 -- target/dif.sh@109 -- # iodepth=16 00:51:09.434 13:13:28 -- target/dif.sh@109 -- # runtime= 00:51:09.434 13:13:28 -- target/dif.sh@109 -- # files=2 00:51:09.434 13:13:28 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:51:09.434 13:13:28 -- target/dif.sh@28 -- # local sub 00:51:09.434 13:13:28 -- target/dif.sh@30 -- # for sub in "$@" 00:51:09.434 13:13:28 -- target/dif.sh@31 -- # create_subsystem 0 00:51:09.434 13:13:28 -- target/dif.sh@18 -- # local sub_id=0 00:51:09.434 13:13:28 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:51:09.434 13:13:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:51:09.434 13:13:28 -- common/autotest_common.sh@10 -- # set +x 00:51:09.434 bdev_null0 00:51:09.434 13:13:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:51:09.434 13:13:28 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:51:09.434 13:13:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:51:09.434 13:13:28 -- common/autotest_common.sh@10 -- # set +x 00:51:09.434 13:13:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:51:09.434 13:13:28 -- 
target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:51:09.434 13:13:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:51:09.434 13:13:28 -- common/autotest_common.sh@10 -- # set +x 00:51:09.434 13:13:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:51:09.434 13:13:28 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:51:09.434 13:13:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:51:09.434 13:13:28 -- common/autotest_common.sh@10 -- # set +x 00:51:09.434 [2024-07-22 13:13:28.659141] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:51:09.434 13:13:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:51:09.434 13:13:28 -- target/dif.sh@30 -- # for sub in "$@" 00:51:09.434 13:13:28 -- target/dif.sh@31 -- # create_subsystem 1 00:51:09.434 13:13:28 -- target/dif.sh@18 -- # local sub_id=1 00:51:09.434 13:13:28 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:51:09.434 13:13:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:51:09.434 13:13:28 -- common/autotest_common.sh@10 -- # set +x 00:51:09.434 bdev_null1 00:51:09.434 13:13:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:51:09.434 13:13:28 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:51:09.434 13:13:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:51:09.434 13:13:28 -- common/autotest_common.sh@10 -- # set +x 00:51:09.434 13:13:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:51:09.434 13:13:28 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:51:09.434 13:13:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:51:09.434 13:13:28 -- common/autotest_common.sh@10 -- # set +x 00:51:09.434 13:13:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:51:09.434 13:13:28 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:51:09.434 13:13:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:51:09.434 13:13:28 -- common/autotest_common.sh@10 -- # set +x 00:51:09.434 13:13:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:51:09.434 13:13:28 -- target/dif.sh@30 -- # for sub in "$@" 00:51:09.434 13:13:28 -- target/dif.sh@31 -- # create_subsystem 2 00:51:09.434 13:13:28 -- target/dif.sh@18 -- # local sub_id=2 00:51:09.434 13:13:28 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:51:09.434 13:13:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:51:09.434 13:13:28 -- common/autotest_common.sh@10 -- # set +x 00:51:09.434 bdev_null2 00:51:09.434 13:13:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:51:09.434 13:13:28 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:51:09.434 13:13:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:51:09.434 13:13:28 -- common/autotest_common.sh@10 -- # set +x 00:51:09.434 13:13:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:51:09.434 13:13:28 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:51:09.434 13:13:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:51:09.434 13:13:28 -- common/autotest_common.sh@10 -- # set +x 00:51:09.434 
13:13:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:51:09.434 13:13:28 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:51:09.434 13:13:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:51:09.434 13:13:28 -- common/autotest_common.sh@10 -- # set +x 00:51:09.434 13:13:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:51:09.434 13:13:28 -- target/dif.sh@112 -- # fio /dev/fd/62 00:51:09.434 13:13:28 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:51:09.434 13:13:28 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:51:09.434 13:13:28 -- nvmf/common.sh@520 -- # config=() 00:51:09.434 13:13:28 -- nvmf/common.sh@520 -- # local subsystem config 00:51:09.434 13:13:28 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:51:09.434 13:13:28 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:51:09.434 { 00:51:09.434 "params": { 00:51:09.434 "name": "Nvme$subsystem", 00:51:09.434 "trtype": "$TEST_TRANSPORT", 00:51:09.434 "traddr": "$NVMF_FIRST_TARGET_IP", 00:51:09.434 "adrfam": "ipv4", 00:51:09.434 "trsvcid": "$NVMF_PORT", 00:51:09.434 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:51:09.434 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:51:09.434 "hdgst": ${hdgst:-false}, 00:51:09.434 "ddgst": ${ddgst:-false} 00:51:09.434 }, 00:51:09.434 "method": "bdev_nvme_attach_controller" 00:51:09.434 } 00:51:09.434 EOF 00:51:09.434 )") 00:51:09.434 13:13:28 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:51:09.434 13:13:28 -- target/dif.sh@82 -- # gen_fio_conf 00:51:09.434 13:13:28 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:51:09.434 13:13:28 -- nvmf/common.sh@542 -- # cat 00:51:09.434 13:13:28 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:51:09.434 13:13:28 -- target/dif.sh@54 -- # local file 00:51:09.434 13:13:28 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:51:09.434 13:13:28 -- common/autotest_common.sh@1318 -- # local sanitizers 00:51:09.434 13:13:28 -- target/dif.sh@56 -- # cat 00:51:09.434 13:13:28 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:51:09.434 13:13:28 -- common/autotest_common.sh@1320 -- # shift 00:51:09.434 13:13:28 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:51:09.434 13:13:28 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:51:09.434 13:13:28 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:51:09.434 13:13:28 -- common/autotest_common.sh@1324 -- # grep libasan 00:51:09.434 13:13:28 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:51:09.434 { 00:51:09.434 "params": { 00:51:09.434 "name": "Nvme$subsystem", 00:51:09.434 "trtype": "$TEST_TRANSPORT", 00:51:09.434 "traddr": "$NVMF_FIRST_TARGET_IP", 00:51:09.434 "adrfam": "ipv4", 00:51:09.434 "trsvcid": "$NVMF_PORT", 00:51:09.434 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:51:09.434 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:51:09.434 "hdgst": ${hdgst:-false}, 00:51:09.434 "ddgst": ${ddgst:-false} 00:51:09.434 }, 00:51:09.434 "method": "bdev_nvme_attach_controller" 00:51:09.434 } 00:51:09.434 EOF 00:51:09.434 )") 00:51:09.434 13:13:28 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:51:09.434 13:13:28 -- target/dif.sh@72 -- # (( 
file = 1 )) 00:51:09.434 13:13:28 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:51:09.434 13:13:28 -- target/dif.sh@72 -- # (( file <= files )) 00:51:09.434 13:13:28 -- nvmf/common.sh@542 -- # cat 00:51:09.434 13:13:28 -- target/dif.sh@73 -- # cat 00:51:09.434 13:13:28 -- target/dif.sh@72 -- # (( file++ )) 00:51:09.434 13:13:28 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:51:09.434 13:13:28 -- target/dif.sh@72 -- # (( file <= files )) 00:51:09.434 13:13:28 -- target/dif.sh@73 -- # cat 00:51:09.434 13:13:28 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:51:09.434 { 00:51:09.434 "params": { 00:51:09.434 "name": "Nvme$subsystem", 00:51:09.434 "trtype": "$TEST_TRANSPORT", 00:51:09.434 "traddr": "$NVMF_FIRST_TARGET_IP", 00:51:09.434 "adrfam": "ipv4", 00:51:09.434 "trsvcid": "$NVMF_PORT", 00:51:09.434 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:51:09.434 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:51:09.434 "hdgst": ${hdgst:-false}, 00:51:09.435 "ddgst": ${ddgst:-false} 00:51:09.435 }, 00:51:09.435 "method": "bdev_nvme_attach_controller" 00:51:09.435 } 00:51:09.435 EOF 00:51:09.435 )") 00:51:09.435 13:13:28 -- nvmf/common.sh@542 -- # cat 00:51:09.435 13:13:28 -- target/dif.sh@72 -- # (( file++ )) 00:51:09.435 13:13:28 -- target/dif.sh@72 -- # (( file <= files )) 00:51:09.435 13:13:28 -- nvmf/common.sh@544 -- # jq . 00:51:09.435 13:13:28 -- nvmf/common.sh@545 -- # IFS=, 00:51:09.435 13:13:28 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:51:09.435 "params": { 00:51:09.435 "name": "Nvme0", 00:51:09.435 "trtype": "tcp", 00:51:09.435 "traddr": "10.0.0.2", 00:51:09.435 "adrfam": "ipv4", 00:51:09.435 "trsvcid": "4420", 00:51:09.435 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:51:09.435 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:51:09.435 "hdgst": false, 00:51:09.435 "ddgst": false 00:51:09.435 }, 00:51:09.435 "method": "bdev_nvme_attach_controller" 00:51:09.435 },{ 00:51:09.435 "params": { 00:51:09.435 "name": "Nvme1", 00:51:09.435 "trtype": "tcp", 00:51:09.435 "traddr": "10.0.0.2", 00:51:09.435 "adrfam": "ipv4", 00:51:09.435 "trsvcid": "4420", 00:51:09.435 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:51:09.435 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:51:09.435 "hdgst": false, 00:51:09.435 "ddgst": false 00:51:09.435 }, 00:51:09.435 "method": "bdev_nvme_attach_controller" 00:51:09.435 },{ 00:51:09.435 "params": { 00:51:09.435 "name": "Nvme2", 00:51:09.435 "trtype": "tcp", 00:51:09.435 "traddr": "10.0.0.2", 00:51:09.435 "adrfam": "ipv4", 00:51:09.435 "trsvcid": "4420", 00:51:09.435 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:51:09.435 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:51:09.435 "hdgst": false, 00:51:09.435 "ddgst": false 00:51:09.435 }, 00:51:09.435 "method": "bdev_nvme_attach_controller" 00:51:09.435 }' 00:51:09.435 13:13:28 -- common/autotest_common.sh@1324 -- # asan_lib= 00:51:09.435 13:13:28 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:51:09.435 13:13:28 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:51:09.435 13:13:28 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:51:09.435 13:13:28 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:51:09.435 13:13:28 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:51:09.435 13:13:28 -- common/autotest_common.sh@1324 -- # asan_lib= 00:51:09.435 13:13:28 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:51:09.435 13:13:28 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:51:09.435 13:13:28 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:51:09.693 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:51:09.693 ... 00:51:09.693 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:51:09.693 ... 00:51:09.693 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:51:09.693 ... 00:51:09.693 fio-3.35 00:51:09.693 Starting 24 threads 00:51:10.257 [2024-07-22 13:13:29.546755] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:51:10.257 [2024-07-22 13:13:29.546819] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:51:22.453 00:51:22.453 filename0: (groupid=0, jobs=1): err= 0: pid=101725: Mon Jul 22 13:13:39 2024 00:51:22.453 read: IOPS=218, BW=872KiB/s (893kB/s)(8736KiB/10016msec) 00:51:22.453 slat (usec): min=7, max=8023, avg=19.52, stdev=257.13 00:51:22.453 clat (msec): min=26, max=145, avg=73.27, stdev=24.16 00:51:22.453 lat (msec): min=26, max=145, avg=73.29, stdev=24.16 00:51:22.453 clat percentiles (msec): 00:51:22.453 | 1.00th=[ 32], 5.00th=[ 39], 10.00th=[ 44], 20.00th=[ 49], 00:51:22.453 | 30.00th=[ 59], 40.00th=[ 66], 50.00th=[ 71], 60.00th=[ 78], 00:51:22.453 | 70.00th=[ 88], 80.00th=[ 96], 90.00th=[ 106], 95.00th=[ 116], 00:51:22.453 | 99.00th=[ 132], 99.50th=[ 132], 99.90th=[ 146], 99.95th=[ 146], 00:51:22.453 | 99.99th=[ 146] 00:51:22.453 bw ( KiB/s): min= 588, max= 1176, per=4.33%, avg=866.60, stdev=196.25, samples=20 00:51:22.453 iops : min= 147, max= 294, avg=216.60, stdev=49.07, samples=20 00:51:22.453 lat (msec) : 50=21.02%, 100=62.68%, 250=16.30% 00:51:22.453 cpu : usr=40.40%, sys=0.99%, ctx=1187, majf=0, minf=9 00:51:22.453 IO depths : 1=1.1%, 2=2.4%, 4=8.4%, 8=75.2%, 16=12.9%, 32=0.0%, >=64=0.0% 00:51:22.453 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:51:22.453 complete : 0=0.0%, 4=89.9%, 8=5.8%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:51:22.453 issued rwts: total=2184,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:51:22.453 latency : target=0, window=0, percentile=100.00%, depth=16 00:51:22.454 filename0: (groupid=0, jobs=1): err= 0: pid=101726: Mon Jul 22 13:13:39 2024 00:51:22.454 read: IOPS=187, BW=750KiB/s (768kB/s)(7524KiB/10027msec) 00:51:22.454 slat (usec): min=3, max=4020, avg=12.63, stdev=92.52 00:51:22.454 clat (msec): min=35, max=182, avg=85.19, stdev=24.36 00:51:22.454 lat (msec): min=35, max=182, avg=85.20, stdev=24.36 00:51:22.454 clat percentiles (msec): 00:51:22.454 | 1.00th=[ 40], 5.00th=[ 48], 10.00th=[ 56], 20.00th=[ 67], 00:51:22.454 | 30.00th=[ 71], 40.00th=[ 78], 50.00th=[ 85], 60.00th=[ 92], 00:51:22.454 | 70.00th=[ 96], 80.00th=[ 103], 90.00th=[ 108], 95.00th=[ 133], 00:51:22.454 | 99.00th=[ 157], 99.50th=[ 165], 99.90th=[ 182], 99.95th=[ 182], 00:51:22.454 | 99.99th=[ 182] 00:51:22.454 bw ( KiB/s): min= 512, max= 1024, per=3.73%, avg=746.00, stdev=132.93, samples=20 00:51:22.454 iops : min= 128, max= 256, avg=186.50, stdev=33.23, samples=20 00:51:22.454 lat (msec) : 50=9.04%, 100=67.52%, 250=23.44% 00:51:22.454 cpu : usr=45.31%, sys=1.08%, ctx=1061, majf=0, minf=9 00:51:22.454 IO depths : 1=3.2%, 2=6.9%, 4=17.4%, 8=63.2%, 16=9.4%, 32=0.0%, >=64=0.0% 00:51:22.454 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:51:22.454 complete : 0=0.0%, 4=91.9%, 8=2.5%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:51:22.454 issued rwts: total=1881,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:51:22.454 latency : target=0, window=0, percentile=100.00%, depth=16 00:51:22.454 filename0: (groupid=0, jobs=1): err= 0: pid=101727: Mon Jul 22 13:13:39 2024 00:51:22.454 read: IOPS=218, BW=874KiB/s (895kB/s)(8784KiB/10048msec) 00:51:22.454 slat (usec): min=5, max=9023, avg=20.70, stdev=276.91 00:51:22.454 clat (msec): min=29, max=150, avg=72.96, stdev=22.44 00:51:22.454 lat (msec): min=29, max=150, avg=72.98, stdev=22.45 00:51:22.454 clat percentiles (msec): 00:51:22.454 | 1.00th=[ 32], 5.00th=[ 37], 10.00th=[ 46], 20.00th=[ 53], 00:51:22.454 | 30.00th=[ 59], 40.00th=[ 67], 50.00th=[ 71], 60.00th=[ 77], 00:51:22.454 | 70.00th=[ 87], 80.00th=[ 94], 90.00th=[ 102], 95.00th=[ 107], 00:51:22.454 | 99.00th=[ 126], 99.50th=[ 136], 99.90th=[ 150], 99.95th=[ 150], 00:51:22.454 | 99.99th=[ 150] 00:51:22.454 bw ( KiB/s): min= 641, max= 1168, per=4.36%, avg=872.25, stdev=163.57, samples=20 00:51:22.454 iops : min= 160, max= 292, avg=217.95, stdev=40.86, samples=20 00:51:22.454 lat (msec) : 50=18.31%, 100=69.54%, 250=12.16% 00:51:22.454 cpu : usr=44.64%, sys=1.17%, ctx=1163, majf=0, minf=9 00:51:22.454 IO depths : 1=1.9%, 2=4.1%, 4=11.1%, 8=71.0%, 16=11.8%, 32=0.0%, >=64=0.0% 00:51:22.454 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:51:22.454 complete : 0=0.0%, 4=90.6%, 8=5.0%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:51:22.454 issued rwts: total=2196,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:51:22.454 latency : target=0, window=0, percentile=100.00%, depth=16 00:51:22.454 filename0: (groupid=0, jobs=1): err= 0: pid=101728: Mon Jul 22 13:13:39 2024 00:51:22.454 read: IOPS=220, BW=880KiB/s (901kB/s)(8848KiB/10051msec) 00:51:22.454 slat (usec): min=4, max=8026, avg=21.47, stdev=295.70 00:51:22.454 clat (msec): min=3, max=155, avg=72.44, stdev=24.98 00:51:22.454 lat (msec): min=3, max=155, avg=72.46, stdev=25.00 00:51:22.454 clat percentiles (msec): 00:51:22.454 | 1.00th=[ 10], 5.00th=[ 36], 10.00th=[ 45], 20.00th=[ 52], 00:51:22.454 | 30.00th=[ 61], 40.00th=[ 64], 50.00th=[ 72], 60.00th=[ 73], 00:51:22.454 | 70.00th=[ 85], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 120], 00:51:22.454 | 99.00th=[ 131], 99.50th=[ 136], 99.90th=[ 144], 99.95th=[ 144], 00:51:22.454 | 99.99th=[ 157] 00:51:22.454 bw ( KiB/s): min= 640, max= 1587, per=4.38%, avg=877.90, stdev=220.49, samples=20 00:51:22.454 iops : min= 160, max= 396, avg=219.35, stdev=54.99, samples=20 00:51:22.454 lat (msec) : 4=0.63%, 10=0.72%, 20=1.54%, 50=14.96%, 100=69.08% 00:51:22.454 lat (msec) : 250=13.07% 00:51:22.454 cpu : usr=34.00%, sys=0.74%, ctx=1057, majf=0, minf=9 00:51:22.454 IO depths : 1=1.0%, 2=2.3%, 4=8.9%, 8=74.9%, 16=12.9%, 32=0.0%, >=64=0.0% 00:51:22.454 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:51:22.454 complete : 0=0.0%, 4=89.9%, 8=5.9%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:51:22.454 issued rwts: total=2212,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:51:22.454 latency : target=0, window=0, percentile=100.00%, depth=16 00:51:22.454 filename0: (groupid=0, jobs=1): err= 0: pid=101729: Mon Jul 22 13:13:39 2024 00:51:22.454 read: IOPS=251, BW=1008KiB/s (1032kB/s)(9.85MiB/10011msec) 00:51:22.454 slat (usec): min=3, max=5995, avg=18.57, stdev=185.99 00:51:22.454 clat (msec): min=2, max=139, avg=63.38, stdev=23.44 00:51:22.454 lat (msec): min=2, max=139, avg=63.40, 
stdev=23.45 00:51:22.454 clat percentiles (msec): 00:51:22.454 | 1.00th=[ 5], 5.00th=[ 33], 10.00th=[ 40], 20.00th=[ 45], 00:51:22.454 | 30.00th=[ 50], 40.00th=[ 56], 50.00th=[ 63], 60.00th=[ 68], 00:51:22.454 | 70.00th=[ 72], 80.00th=[ 82], 90.00th=[ 94], 95.00th=[ 105], 00:51:22.454 | 99.00th=[ 133], 99.50th=[ 136], 99.90th=[ 140], 99.95th=[ 140], 00:51:22.454 | 99.99th=[ 140] 00:51:22.454 bw ( KiB/s): min= 640, max= 1744, per=5.04%, avg=1008.89, stdev=256.89, samples=19 00:51:22.454 iops : min= 160, max= 436, avg=252.11, stdev=64.24, samples=19 00:51:22.454 lat (msec) : 4=0.63%, 10=1.82%, 20=0.71%, 50=28.51%, 100=60.86% 00:51:22.454 lat (msec) : 250=7.45% 00:51:22.454 cpu : usr=44.83%, sys=1.04%, ctx=1236, majf=0, minf=9 00:51:22.454 IO depths : 1=0.8%, 2=1.7%, 4=8.8%, 8=76.1%, 16=12.6%, 32=0.0%, >=64=0.0% 00:51:22.454 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:51:22.454 complete : 0=0.0%, 4=89.5%, 8=5.9%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:51:22.454 issued rwts: total=2522,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:51:22.454 latency : target=0, window=0, percentile=100.00%, depth=16 00:51:22.454 filename0: (groupid=0, jobs=1): err= 0: pid=101730: Mon Jul 22 13:13:39 2024 00:51:22.454 read: IOPS=197, BW=789KiB/s (808kB/s)(7912KiB/10030msec) 00:51:22.454 slat (nsec): min=5002, max=30909, avg=10152.74, stdev=3409.94 00:51:22.454 clat (msec): min=35, max=168, avg=81.06, stdev=24.23 00:51:22.454 lat (msec): min=35, max=168, avg=81.07, stdev=24.23 00:51:22.454 clat percentiles (msec): 00:51:22.454 | 1.00th=[ 45], 5.00th=[ 48], 10.00th=[ 51], 20.00th=[ 61], 00:51:22.454 | 30.00th=[ 63], 40.00th=[ 72], 50.00th=[ 74], 60.00th=[ 85], 00:51:22.454 | 70.00th=[ 96], 80.00th=[ 101], 90.00th=[ 110], 95.00th=[ 125], 00:51:22.454 | 99.00th=[ 146], 99.50th=[ 167], 99.90th=[ 169], 99.95th=[ 169], 00:51:22.454 | 99.99th=[ 169] 00:51:22.454 bw ( KiB/s): min= 512, max= 992, per=3.92%, avg=784.30, stdev=156.53, samples=20 00:51:22.454 iops : min= 128, max= 248, avg=196.05, stdev=39.11, samples=20 00:51:22.454 lat (msec) : 50=10.36%, 100=69.67%, 250=19.97% 00:51:22.454 cpu : usr=32.63%, sys=0.60%, ctx=899, majf=0, minf=9 00:51:22.454 IO depths : 1=1.7%, 2=3.7%, 4=11.7%, 8=71.3%, 16=11.6%, 32=0.0%, >=64=0.0% 00:51:22.454 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:51:22.454 complete : 0=0.0%, 4=90.5%, 8=4.6%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:51:22.454 issued rwts: total=1978,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:51:22.454 latency : target=0, window=0, percentile=100.00%, depth=16 00:51:22.454 filename0: (groupid=0, jobs=1): err= 0: pid=101731: Mon Jul 22 13:13:39 2024 00:51:22.454 read: IOPS=200, BW=803KiB/s (823kB/s)(8064KiB/10037msec) 00:51:22.454 slat (usec): min=7, max=4051, avg=13.07, stdev=90.12 00:51:22.454 clat (msec): min=25, max=158, avg=79.49, stdev=28.85 00:51:22.454 lat (msec): min=25, max=158, avg=79.51, stdev=28.85 00:51:22.454 clat percentiles (msec): 00:51:22.454 | 1.00th=[ 31], 5.00th=[ 40], 10.00th=[ 46], 20.00th=[ 52], 00:51:22.454 | 30.00th=[ 61], 40.00th=[ 67], 50.00th=[ 75], 60.00th=[ 86], 00:51:22.454 | 70.00th=[ 96], 80.00th=[ 108], 90.00th=[ 121], 95.00th=[ 132], 00:51:22.454 | 99.00th=[ 146], 99.50th=[ 157], 99.90th=[ 159], 99.95th=[ 159], 00:51:22.454 | 99.99th=[ 159] 00:51:22.454 bw ( KiB/s): min= 512, max= 1248, per=4.00%, avg=800.85, stdev=242.98, samples=20 00:51:22.454 iops : min= 128, max= 312, avg=200.15, stdev=60.78, samples=20 00:51:22.454 lat (msec) : 50=17.86%, 100=57.29%, 250=24.85% 00:51:22.454 cpu 
: usr=36.78%, sys=0.90%, ctx=1481, majf=0, minf=9 00:51:22.454 IO depths : 1=0.7%, 2=1.6%, 4=8.0%, 8=76.4%, 16=13.3%, 32=0.0%, >=64=0.0% 00:51:22.454 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:51:22.454 complete : 0=0.0%, 4=89.4%, 8=6.6%, 16=4.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:51:22.454 issued rwts: total=2016,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:51:22.454 latency : target=0, window=0, percentile=100.00%, depth=16 00:51:22.454 filename0: (groupid=0, jobs=1): err= 0: pid=101732: Mon Jul 22 13:13:39 2024 00:51:22.454 read: IOPS=223, BW=895KiB/s (917kB/s)(8984KiB/10037msec) 00:51:22.454 slat (usec): min=3, max=8022, avg=19.25, stdev=223.95 00:51:22.454 clat (msec): min=25, max=144, avg=71.37, stdev=22.34 00:51:22.454 lat (msec): min=26, max=144, avg=71.39, stdev=22.34 00:51:22.454 clat percentiles (msec): 00:51:22.454 | 1.00th=[ 35], 5.00th=[ 40], 10.00th=[ 44], 20.00th=[ 53], 00:51:22.454 | 30.00th=[ 61], 40.00th=[ 64], 50.00th=[ 68], 60.00th=[ 71], 00:51:22.454 | 70.00th=[ 80], 80.00th=[ 92], 90.00th=[ 105], 95.00th=[ 117], 00:51:22.454 | 99.00th=[ 130], 99.50th=[ 132], 99.90th=[ 146], 99.95th=[ 146], 00:51:22.454 | 99.99th=[ 146] 00:51:22.454 bw ( KiB/s): min= 523, max= 1258, per=4.45%, avg=891.35, stdev=177.09, samples=20 00:51:22.454 iops : min= 130, max= 314, avg=222.75, stdev=44.29, samples=20 00:51:22.454 lat (msec) : 50=16.56%, 100=70.93%, 250=12.51% 00:51:22.454 cpu : usr=44.73%, sys=1.01%, ctx=1292, majf=0, minf=9 00:51:22.454 IO depths : 1=1.5%, 2=3.3%, 4=10.4%, 8=72.9%, 16=11.8%, 32=0.0%, >=64=0.0% 00:51:22.454 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:51:22.454 complete : 0=0.0%, 4=90.3%, 8=4.9%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:51:22.454 issued rwts: total=2246,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:51:22.454 latency : target=0, window=0, percentile=100.00%, depth=16 00:51:22.454 filename1: (groupid=0, jobs=1): err= 0: pid=101733: Mon Jul 22 13:13:39 2024 00:51:22.454 read: IOPS=179, BW=716KiB/s (733kB/s)(7168KiB/10008msec) 00:51:22.454 slat (usec): min=3, max=8016, avg=17.09, stdev=211.47 00:51:22.454 clat (msec): min=29, max=173, avg=89.25, stdev=25.02 00:51:22.454 lat (msec): min=29, max=173, avg=89.27, stdev=25.02 00:51:22.454 clat percentiles (msec): 00:51:22.454 | 1.00th=[ 44], 5.00th=[ 47], 10.00th=[ 61], 20.00th=[ 67], 00:51:22.454 | 30.00th=[ 72], 40.00th=[ 84], 50.00th=[ 91], 60.00th=[ 96], 00:51:22.454 | 70.00th=[ 102], 80.00th=[ 107], 90.00th=[ 123], 95.00th=[ 134], 00:51:22.454 | 99.00th=[ 144], 99.50th=[ 169], 99.90th=[ 174], 99.95th=[ 174], 00:51:22.454 | 99.99th=[ 174] 00:51:22.455 bw ( KiB/s): min= 512, max= 944, per=3.55%, avg=710.35, stdev=116.75, samples=20 00:51:22.455 iops : min= 128, max= 236, avg=177.55, stdev=29.19, samples=20 00:51:22.455 lat (msec) : 50=6.64%, 100=62.50%, 250=30.86% 00:51:22.455 cpu : usr=37.97%, sys=0.97%, ctx=1077, majf=0, minf=9 00:51:22.455 IO depths : 1=2.8%, 2=6.4%, 4=17.0%, 8=63.6%, 16=10.2%, 32=0.0%, >=64=0.0% 00:51:22.455 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:51:22.455 complete : 0=0.0%, 4=92.1%, 8=2.5%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:51:22.455 issued rwts: total=1792,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:51:22.455 latency : target=0, window=0, percentile=100.00%, depth=16 00:51:22.455 filename1: (groupid=0, jobs=1): err= 0: pid=101734: Mon Jul 22 13:13:39 2024 00:51:22.455 read: IOPS=184, BW=737KiB/s (754kB/s)(7372KiB/10008msec) 00:51:22.455 slat (usec): min=4, max=8031, avg=19.45, stdev=264.03 
00:51:22.455 clat (msec): min=31, max=171, avg=86.72, stdev=24.03 00:51:22.455 lat (msec): min=31, max=171, avg=86.74, stdev=24.03 00:51:22.455 clat percentiles (msec): 00:51:22.455 | 1.00th=[ 44], 5.00th=[ 48], 10.00th=[ 59], 20.00th=[ 64], 00:51:22.455 | 30.00th=[ 72], 40.00th=[ 81], 50.00th=[ 86], 60.00th=[ 96], 00:51:22.455 | 70.00th=[ 100], 80.00th=[ 106], 90.00th=[ 117], 95.00th=[ 131], 00:51:22.455 | 99.00th=[ 155], 99.50th=[ 169], 99.90th=[ 171], 99.95th=[ 171], 00:51:22.455 | 99.99th=[ 171] 00:51:22.455 bw ( KiB/s): min= 512, max= 952, per=3.65%, avg=730.75, stdev=126.00, samples=20 00:51:22.455 iops : min= 128, max= 238, avg=182.65, stdev=31.50, samples=20 00:51:22.455 lat (msec) : 50=7.60%, 100=66.25%, 250=26.15% 00:51:22.455 cpu : usr=34.94%, sys=0.70%, ctx=1067, majf=0, minf=9 00:51:22.455 IO depths : 1=2.2%, 2=5.0%, 4=14.5%, 8=67.8%, 16=10.5%, 32=0.0%, >=64=0.0% 00:51:22.455 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:51:22.455 complete : 0=0.0%, 4=91.1%, 8=3.4%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:51:22.455 issued rwts: total=1843,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:51:22.455 latency : target=0, window=0, percentile=100.00%, depth=16 00:51:22.455 filename1: (groupid=0, jobs=1): err= 0: pid=101735: Mon Jul 22 13:13:39 2024 00:51:22.455 read: IOPS=208, BW=833KiB/s (853kB/s)(8360KiB/10035msec) 00:51:22.455 slat (usec): min=7, max=8017, avg=21.06, stdev=263.77 00:51:22.455 clat (msec): min=26, max=178, avg=76.70, stdev=25.50 00:51:22.455 lat (msec): min=26, max=178, avg=76.72, stdev=25.50 00:51:22.455 clat percentiles (msec): 00:51:22.455 | 1.00th=[ 31], 5.00th=[ 40], 10.00th=[ 48], 20.00th=[ 57], 00:51:22.455 | 30.00th=[ 61], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 80], 00:51:22.455 | 70.00th=[ 88], 80.00th=[ 97], 90.00th=[ 108], 95.00th=[ 123], 00:51:22.455 | 99.00th=[ 140], 99.50th=[ 157], 99.90th=[ 180], 99.95th=[ 180], 00:51:22.455 | 99.99th=[ 180] 00:51:22.455 bw ( KiB/s): min= 512, max= 1120, per=4.14%, avg=828.85, stdev=159.63, samples=20 00:51:22.455 iops : min= 128, max= 280, avg=207.15, stdev=39.94, samples=20 00:51:22.455 lat (msec) : 50=16.65%, 100=65.02%, 250=18.33% 00:51:22.455 cpu : usr=33.87%, sys=0.81%, ctx=1044, majf=0, minf=9 00:51:22.455 IO depths : 1=0.8%, 2=1.6%, 4=7.4%, 8=76.9%, 16=13.3%, 32=0.0%, >=64=0.0% 00:51:22.455 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:51:22.455 complete : 0=0.0%, 4=89.5%, 8=6.4%, 16=4.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:51:22.455 issued rwts: total=2090,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:51:22.455 latency : target=0, window=0, percentile=100.00%, depth=16 00:51:22.455 filename1: (groupid=0, jobs=1): err= 0: pid=101736: Mon Jul 22 13:13:39 2024 00:51:22.455 read: IOPS=223, BW=895KiB/s (917kB/s)(8968KiB/10017msec) 00:51:22.455 slat (usec): min=3, max=8022, avg=21.53, stdev=259.89 00:51:22.455 clat (msec): min=22, max=132, avg=71.34, stdev=22.17 00:51:22.455 lat (msec): min=22, max=132, avg=71.36, stdev=22.18 00:51:22.455 clat percentiles (msec): 00:51:22.455 | 1.00th=[ 34], 5.00th=[ 41], 10.00th=[ 46], 20.00th=[ 51], 00:51:22.455 | 30.00th=[ 58], 40.00th=[ 64], 50.00th=[ 68], 60.00th=[ 73], 00:51:22.455 | 70.00th=[ 83], 80.00th=[ 92], 90.00th=[ 104], 95.00th=[ 113], 00:51:22.455 | 99.00th=[ 128], 99.50th=[ 129], 99.90th=[ 133], 99.95th=[ 133], 00:51:22.455 | 99.99th=[ 133] 00:51:22.455 bw ( KiB/s): min= 641, max= 1152, per=4.46%, avg=892.85, stdev=151.71, samples=20 00:51:22.455 iops : min= 160, max= 288, avg=223.10, stdev=37.94, samples=20 00:51:22.455 lat 
(msec) : 50=19.40%, 100=67.93%, 250=12.67% 00:51:22.455 cpu : usr=44.22%, sys=0.94%, ctx=1338, majf=0, minf=9 00:51:22.455 IO depths : 1=0.9%, 2=1.9%, 4=8.6%, 8=76.1%, 16=12.5%, 32=0.0%, >=64=0.0% 00:51:22.455 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:51:22.455 complete : 0=0.0%, 4=89.6%, 8=5.7%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:51:22.455 issued rwts: total=2242,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:51:22.455 latency : target=0, window=0, percentile=100.00%, depth=16 00:51:22.455 filename1: (groupid=0, jobs=1): err= 0: pid=101737: Mon Jul 22 13:13:39 2024 00:51:22.455 read: IOPS=190, BW=762KiB/s (780kB/s)(7628KiB/10008msec) 00:51:22.455 slat (usec): min=4, max=8020, avg=18.73, stdev=259.31 00:51:22.455 clat (msec): min=7, max=176, avg=83.84, stdev=25.25 00:51:22.455 lat (msec): min=7, max=176, avg=83.86, stdev=25.25 00:51:22.455 clat percentiles (msec): 00:51:22.455 | 1.00th=[ 36], 5.00th=[ 48], 10.00th=[ 54], 20.00th=[ 62], 00:51:22.455 | 30.00th=[ 71], 40.00th=[ 72], 50.00th=[ 84], 60.00th=[ 88], 00:51:22.455 | 70.00th=[ 96], 80.00th=[ 107], 90.00th=[ 121], 95.00th=[ 131], 00:51:22.455 | 99.00th=[ 144], 99.50th=[ 165], 99.90th=[ 178], 99.95th=[ 178], 00:51:22.455 | 99.99th=[ 178] 00:51:22.455 bw ( KiB/s): min= 512, max= 1024, per=3.75%, avg=750.63, stdev=131.03, samples=19 00:51:22.455 iops : min= 128, max= 256, avg=187.63, stdev=32.75, samples=19 00:51:22.455 lat (msec) : 10=0.37%, 50=8.70%, 100=69.69%, 250=21.24% 00:51:22.455 cpu : usr=32.76%, sys=0.56%, ctx=887, majf=0, minf=9 00:51:22.455 IO depths : 1=1.2%, 2=2.6%, 4=10.3%, 8=73.6%, 16=12.4%, 32=0.0%, >=64=0.0% 00:51:22.455 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:51:22.455 complete : 0=0.0%, 4=89.9%, 8=5.5%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:51:22.455 issued rwts: total=1907,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:51:22.455 latency : target=0, window=0, percentile=100.00%, depth=16 00:51:22.455 filename1: (groupid=0, jobs=1): err= 0: pid=101738: Mon Jul 22 13:13:39 2024 00:51:22.455 read: IOPS=223, BW=894KiB/s (915kB/s)(8956KiB/10020msec) 00:51:22.455 slat (usec): min=3, max=4021, avg=14.10, stdev=119.91 00:51:22.455 clat (msec): min=20, max=171, avg=71.46, stdev=27.59 00:51:22.455 lat (msec): min=20, max=171, avg=71.47, stdev=27.59 00:51:22.455 clat percentiles (msec): 00:51:22.455 | 1.00th=[ 32], 5.00th=[ 37], 10.00th=[ 41], 20.00th=[ 47], 00:51:22.455 | 30.00th=[ 50], 40.00th=[ 57], 50.00th=[ 66], 60.00th=[ 77], 00:51:22.455 | 70.00th=[ 89], 80.00th=[ 97], 90.00th=[ 108], 95.00th=[ 121], 00:51:22.455 | 99.00th=[ 144], 99.50th=[ 155], 99.90th=[ 171], 99.95th=[ 171], 00:51:22.455 | 99.99th=[ 171] 00:51:22.455 bw ( KiB/s): min= 560, max= 1378, per=4.46%, avg=892.95, stdev=268.48, samples=20 00:51:22.455 iops : min= 140, max= 344, avg=223.15, stdev=67.07, samples=20 00:51:22.455 lat (msec) : 50=30.95%, 100=54.04%, 250=15.01% 00:51:22.455 cpu : usr=38.50%, sys=1.00%, ctx=1290, majf=0, minf=9 00:51:22.455 IO depths : 1=0.4%, 2=0.8%, 4=5.9%, 8=78.8%, 16=14.1%, 32=0.0%, >=64=0.0% 00:51:22.455 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:51:22.455 complete : 0=0.0%, 4=89.1%, 8=7.3%, 16=3.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:51:22.455 issued rwts: total=2239,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:51:22.455 latency : target=0, window=0, percentile=100.00%, depth=16 00:51:22.455 filename1: (groupid=0, jobs=1): err= 0: pid=101739: Mon Jul 22 13:13:39 2024 00:51:22.455 read: IOPS=206, BW=825KiB/s (845kB/s)(8260KiB/10011msec) 
00:51:22.455 slat (usec): min=7, max=8018, avg=14.71, stdev=176.26 00:51:22.455 clat (msec): min=26, max=188, avg=77.49, stdev=24.78 00:51:22.455 lat (msec): min=26, max=188, avg=77.50, stdev=24.78 00:51:22.455 clat percentiles (msec): 00:51:22.455 | 1.00th=[ 37], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 59], 00:51:22.455 | 30.00th=[ 62], 40.00th=[ 67], 50.00th=[ 72], 60.00th=[ 81], 00:51:22.455 | 70.00th=[ 90], 80.00th=[ 99], 90.00th=[ 108], 95.00th=[ 121], 00:51:22.455 | 99.00th=[ 153], 99.50th=[ 155], 99.90th=[ 188], 99.95th=[ 188], 00:51:22.455 | 99.99th=[ 188] 00:51:22.455 bw ( KiB/s): min= 592, max= 1024, per=4.09%, avg=819.25, stdev=143.63, samples=20 00:51:22.455 iops : min= 148, max= 256, avg=204.80, stdev=35.89, samples=20 00:51:22.455 lat (msec) : 50=13.22%, 100=67.55%, 250=19.23% 00:51:22.455 cpu : usr=34.11%, sys=0.86%, ctx=1003, majf=0, minf=9 00:51:22.455 IO depths : 1=0.8%, 2=1.8%, 4=8.9%, 8=75.4%, 16=13.1%, 32=0.0%, >=64=0.0% 00:51:22.455 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:51:22.455 complete : 0=0.0%, 4=90.1%, 8=5.7%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:51:22.455 issued rwts: total=2065,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:51:22.455 latency : target=0, window=0, percentile=100.00%, depth=16 00:51:22.455 filename1: (groupid=0, jobs=1): err= 0: pid=101740: Mon Jul 22 13:13:39 2024 00:51:22.455 read: IOPS=215, BW=862KiB/s (882kB/s)(8648KiB/10037msec) 00:51:22.455 slat (usec): min=4, max=8016, avg=13.98, stdev=172.21 00:51:22.455 clat (msec): min=27, max=178, avg=74.20, stdev=24.39 00:51:22.455 lat (msec): min=27, max=178, avg=74.22, stdev=24.39 00:51:22.455 clat percentiles (msec): 00:51:22.455 | 1.00th=[ 35], 5.00th=[ 44], 10.00th=[ 48], 20.00th=[ 53], 00:51:22.455 | 30.00th=[ 61], 40.00th=[ 64], 50.00th=[ 72], 60.00th=[ 73], 00:51:22.455 | 70.00th=[ 85], 80.00th=[ 96], 90.00th=[ 105], 95.00th=[ 121], 00:51:22.455 | 99.00th=[ 148], 99.50th=[ 155], 99.90th=[ 178], 99.95th=[ 178], 00:51:22.455 | 99.99th=[ 178] 00:51:22.455 bw ( KiB/s): min= 512, max= 1120, per=4.28%, avg=857.55, stdev=191.38, samples=20 00:51:22.455 iops : min= 128, max= 280, avg=214.35, stdev=47.87, samples=20 00:51:22.455 lat (msec) : 50=18.87%, 100=69.06%, 250=12.07% 00:51:22.455 cpu : usr=33.88%, sys=0.86%, ctx=935, majf=0, minf=9 00:51:22.455 IO depths : 1=0.8%, 2=1.8%, 4=7.5%, 8=76.6%, 16=13.3%, 32=0.0%, >=64=0.0% 00:51:22.455 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:51:22.455 complete : 0=0.0%, 4=89.6%, 8=6.4%, 16=4.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:51:22.455 issued rwts: total=2162,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:51:22.455 latency : target=0, window=0, percentile=100.00%, depth=16 00:51:22.455 filename2: (groupid=0, jobs=1): err= 0: pid=101741: Mon Jul 22 13:13:39 2024 00:51:22.455 read: IOPS=217, BW=870KiB/s (891kB/s)(8732KiB/10032msec) 00:51:22.456 slat (usec): min=4, max=8026, avg=20.71, stdev=296.94 00:51:22.456 clat (msec): min=33, max=147, avg=73.44, stdev=21.75 00:51:22.456 lat (msec): min=33, max=147, avg=73.46, stdev=21.75 00:51:22.456 clat percentiles (msec): 00:51:22.456 | 1.00th=[ 35], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 52], 00:51:22.456 | 30.00th=[ 61], 40.00th=[ 67], 50.00th=[ 72], 60.00th=[ 73], 00:51:22.456 | 70.00th=[ 84], 80.00th=[ 95], 90.00th=[ 106], 95.00th=[ 111], 00:51:22.456 | 99.00th=[ 136], 99.50th=[ 144], 99.90th=[ 148], 99.95th=[ 148], 00:51:22.456 | 99.99th=[ 148] 00:51:22.456 bw ( KiB/s): min= 640, max= 1163, per=4.33%, avg=866.35, stdev=153.42, samples=20 00:51:22.456 iops : min= 
160, max= 290, avg=216.55, stdev=38.28, samples=20 00:51:22.456 lat (msec) : 50=17.27%, 100=71.23%, 250=11.50% 00:51:22.456 cpu : usr=33.10%, sys=0.63%, ctx=981, majf=0, minf=9 00:51:22.456 IO depths : 1=0.6%, 2=1.4%, 4=7.8%, 8=77.0%, 16=13.3%, 32=0.0%, >=64=0.0% 00:51:22.456 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:51:22.456 complete : 0=0.0%, 4=89.9%, 8=5.9%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:51:22.456 issued rwts: total=2183,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:51:22.456 latency : target=0, window=0, percentile=100.00%, depth=16 00:51:22.456 filename2: (groupid=0, jobs=1): err= 0: pid=101742: Mon Jul 22 13:13:39 2024 00:51:22.456 read: IOPS=183, BW=736KiB/s (753kB/s)(7372KiB/10023msec) 00:51:22.456 slat (usec): min=4, max=6821, avg=20.87, stdev=220.18 00:51:22.456 clat (msec): min=39, max=164, avg=86.81, stdev=21.70 00:51:22.456 lat (msec): min=39, max=164, avg=86.83, stdev=21.69 00:51:22.456 clat percentiles (msec): 00:51:22.456 | 1.00th=[ 46], 5.00th=[ 57], 10.00th=[ 61], 20.00th=[ 66], 00:51:22.456 | 30.00th=[ 72], 40.00th=[ 81], 50.00th=[ 87], 60.00th=[ 94], 00:51:22.456 | 70.00th=[ 99], 80.00th=[ 105], 90.00th=[ 112], 95.00th=[ 123], 00:51:22.456 | 99.00th=[ 150], 99.50th=[ 161], 99.90th=[ 165], 99.95th=[ 165], 00:51:22.456 | 99.99th=[ 165] 00:51:22.456 bw ( KiB/s): min= 512, max= 928, per=3.65%, avg=730.35, stdev=121.70, samples=20 00:51:22.456 iops : min= 128, max= 232, avg=182.55, stdev=30.41, samples=20 00:51:22.456 lat (msec) : 50=3.20%, 100=72.65%, 250=24.15% 00:51:22.456 cpu : usr=42.48%, sys=1.04%, ctx=1350, majf=0, minf=9 00:51:22.456 IO depths : 1=2.2%, 2=4.9%, 4=14.0%, 8=67.6%, 16=11.3%, 32=0.0%, >=64=0.0% 00:51:22.456 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:51:22.456 complete : 0=0.0%, 4=91.4%, 8=3.8%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:51:22.456 issued rwts: total=1843,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:51:22.456 latency : target=0, window=0, percentile=100.00%, depth=16 00:51:22.456 filename2: (groupid=0, jobs=1): err= 0: pid=101743: Mon Jul 22 13:13:39 2024 00:51:22.456 read: IOPS=193, BW=774KiB/s (793kB/s)(7752KiB/10010msec) 00:51:22.456 slat (nsec): min=7555, max=33094, avg=10292.80, stdev=3514.05 00:51:22.456 clat (msec): min=8, max=160, avg=82.55, stdev=24.02 00:51:22.456 lat (msec): min=8, max=160, avg=82.56, stdev=24.02 00:51:22.456 clat percentiles (msec): 00:51:22.456 | 1.00th=[ 36], 5.00th=[ 47], 10.00th=[ 52], 20.00th=[ 63], 00:51:22.456 | 30.00th=[ 71], 40.00th=[ 72], 50.00th=[ 83], 60.00th=[ 91], 00:51:22.456 | 70.00th=[ 96], 80.00th=[ 103], 90.00th=[ 109], 95.00th=[ 120], 00:51:22.456 | 99.00th=[ 148], 99.50th=[ 153], 99.90th=[ 161], 99.95th=[ 161], 00:51:22.456 | 99.99th=[ 161] 00:51:22.456 bw ( KiB/s): min= 512, max= 1072, per=3.79%, avg=759.89, stdev=159.80, samples=19 00:51:22.456 iops : min= 128, max= 268, avg=189.95, stdev=39.92, samples=19 00:51:22.456 lat (msec) : 10=0.83%, 50=8.77%, 100=69.40%, 250=21.00% 00:51:22.456 cpu : usr=38.03%, sys=0.95%, ctx=998, majf=0, minf=9 00:51:22.456 IO depths : 1=3.0%, 2=6.4%, 4=16.0%, 8=64.7%, 16=9.9%, 32=0.0%, >=64=0.0% 00:51:22.456 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:51:22.456 complete : 0=0.0%, 4=91.7%, 8=3.0%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:51:22.456 issued rwts: total=1938,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:51:22.456 latency : target=0, window=0, percentile=100.00%, depth=16 00:51:22.456 filename2: (groupid=0, jobs=1): err= 0: pid=101744: Mon Jul 22 13:13:39 2024 
00:51:22.456 read: IOPS=221, BW=885KiB/s (907kB/s)(8884KiB/10035msec) 00:51:22.456 slat (usec): min=6, max=8027, avg=14.16, stdev=170.15 00:51:22.456 clat (msec): min=21, max=146, avg=72.14, stdev=22.53 00:51:22.456 lat (msec): min=21, max=146, avg=72.15, stdev=22.53 00:51:22.456 clat percentiles (msec): 00:51:22.456 | 1.00th=[ 34], 5.00th=[ 38], 10.00th=[ 47], 20.00th=[ 51], 00:51:22.456 | 30.00th=[ 61], 40.00th=[ 62], 50.00th=[ 71], 60.00th=[ 73], 00:51:22.456 | 70.00th=[ 85], 80.00th=[ 96], 90.00th=[ 104], 95.00th=[ 109], 00:51:22.456 | 99.00th=[ 132], 99.50th=[ 132], 99.90th=[ 146], 99.95th=[ 146], 00:51:22.456 | 99.99th=[ 146] 00:51:22.456 bw ( KiB/s): min= 686, max= 1200, per=4.40%, avg=881.15, stdev=163.06, samples=20 00:51:22.456 iops : min= 171, max= 300, avg=220.25, stdev=40.80, samples=20 00:51:22.456 lat (msec) : 50=19.50%, 100=70.01%, 250=10.49% 00:51:22.456 cpu : usr=32.57%, sys=0.70%, ctx=897, majf=0, minf=9 00:51:22.456 IO depths : 1=0.2%, 2=0.5%, 4=5.9%, 8=79.1%, 16=14.3%, 32=0.0%, >=64=0.0% 00:51:22.456 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:51:22.456 complete : 0=0.0%, 4=89.4%, 8=7.1%, 16=3.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:51:22.456 issued rwts: total=2221,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:51:22.456 latency : target=0, window=0, percentile=100.00%, depth=16 00:51:22.456 filename2: (groupid=0, jobs=1): err= 0: pid=101745: Mon Jul 22 13:13:39 2024 00:51:22.456 read: IOPS=233, BW=934KiB/s (957kB/s)(9380KiB/10041msec) 00:51:22.456 slat (usec): min=3, max=4020, avg=15.30, stdev=143.39 00:51:22.456 clat (msec): min=10, max=173, avg=68.36, stdev=25.93 00:51:22.456 lat (msec): min=10, max=173, avg=68.37, stdev=25.93 00:51:22.456 clat percentiles (msec): 00:51:22.456 | 1.00th=[ 15], 5.00th=[ 36], 10.00th=[ 41], 20.00th=[ 47], 00:51:22.456 | 30.00th=[ 51], 40.00th=[ 56], 50.00th=[ 64], 60.00th=[ 72], 00:51:22.456 | 70.00th=[ 81], 80.00th=[ 96], 90.00th=[ 105], 95.00th=[ 112], 00:51:22.456 | 99.00th=[ 136], 99.50th=[ 140], 99.90th=[ 174], 99.95th=[ 174], 00:51:22.456 | 99.99th=[ 174] 00:51:22.456 bw ( KiB/s): min= 512, max= 1384, per=4.65%, avg=931.80, stdev=251.87, samples=20 00:51:22.456 iops : min= 128, max= 346, avg=232.85, stdev=63.00, samples=20 00:51:22.456 lat (msec) : 20=1.36%, 50=28.10%, 100=56.55%, 250=13.99% 00:51:22.456 cpu : usr=43.27%, sys=0.92%, ctx=1220, majf=0, minf=9 00:51:22.456 IO depths : 1=0.7%, 2=1.4%, 4=6.8%, 8=77.6%, 16=13.5%, 32=0.0%, >=64=0.0% 00:51:22.456 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:51:22.456 complete : 0=0.0%, 4=89.3%, 8=6.7%, 16=4.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:51:22.456 issued rwts: total=2345,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:51:22.456 latency : target=0, window=0, percentile=100.00%, depth=16 00:51:22.456 filename2: (groupid=0, jobs=1): err= 0: pid=101746: Mon Jul 22 13:13:39 2024 00:51:22.456 read: IOPS=193, BW=773KiB/s (792kB/s)(7736KiB/10005msec) 00:51:22.456 slat (usec): min=4, max=8021, avg=18.93, stdev=257.51 00:51:22.456 clat (msec): min=5, max=167, avg=82.64, stdev=24.91 00:51:22.456 lat (msec): min=5, max=167, avg=82.66, stdev=24.91 00:51:22.456 clat percentiles (msec): 00:51:22.456 | 1.00th=[ 16], 5.00th=[ 47], 10.00th=[ 51], 20.00th=[ 62], 00:51:22.456 | 30.00th=[ 71], 40.00th=[ 72], 50.00th=[ 84], 60.00th=[ 92], 00:51:22.456 | 70.00th=[ 96], 80.00th=[ 104], 90.00th=[ 111], 95.00th=[ 130], 00:51:22.456 | 99.00th=[ 144], 99.50th=[ 155], 99.90th=[ 167], 99.95th=[ 167], 00:51:22.456 | 99.99th=[ 167] 00:51:22.456 bw ( KiB/s): min= 528, max= 
1048, per=3.76%, avg=753.05, stdev=146.30, samples=19 00:51:22.456 iops : min= 132, max= 262, avg=188.21, stdev=36.52, samples=19 00:51:22.456 lat (msec) : 10=0.83%, 20=0.31%, 50=8.69%, 100=68.56%, 250=21.61% 00:51:22.456 cpu : usr=33.43%, sys=0.82%, ctx=920, majf=0, minf=9 00:51:22.456 IO depths : 1=1.9%, 2=4.2%, 4=12.6%, 8=70.1%, 16=11.3%, 32=0.0%, >=64=0.0% 00:51:22.456 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:51:22.456 complete : 0=0.0%, 4=90.9%, 8=4.2%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:51:22.456 issued rwts: total=1934,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:51:22.456 latency : target=0, window=0, percentile=100.00%, depth=16 00:51:22.456 filename2: (groupid=0, jobs=1): err= 0: pid=101747: Mon Jul 22 13:13:39 2024 00:51:22.456 read: IOPS=185, BW=741KiB/s (759kB/s)(7416KiB/10006msec) 00:51:22.456 slat (usec): min=4, max=7654, avg=17.50, stdev=196.92 00:51:22.456 clat (msec): min=36, max=182, avg=86.23, stdev=24.91 00:51:22.456 lat (msec): min=36, max=182, avg=86.25, stdev=24.91 00:51:22.456 clat percentiles (msec): 00:51:22.456 | 1.00th=[ 41], 5.00th=[ 48], 10.00th=[ 56], 20.00th=[ 64], 00:51:22.456 | 30.00th=[ 71], 40.00th=[ 79], 50.00th=[ 87], 60.00th=[ 94], 00:51:22.456 | 70.00th=[ 99], 80.00th=[ 105], 90.00th=[ 117], 95.00th=[ 132], 00:51:22.456 | 99.00th=[ 157], 99.50th=[ 167], 99.90th=[ 184], 99.95th=[ 184], 00:51:22.456 | 99.99th=[ 184] 00:51:22.456 bw ( KiB/s): min= 512, max= 1072, per=3.67%, avg=735.20, stdev=151.33, samples=20 00:51:22.456 iops : min= 128, max= 268, avg=183.75, stdev=37.84, samples=20 00:51:22.456 lat (msec) : 50=6.47%, 100=67.26%, 250=26.27% 00:51:22.456 cpu : usr=40.03%, sys=1.09%, ctx=1225, majf=0, minf=9 00:51:22.456 IO depths : 1=2.6%, 2=5.8%, 4=16.4%, 8=64.7%, 16=10.5%, 32=0.0%, >=64=0.0% 00:51:22.456 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:51:22.456 complete : 0=0.0%, 4=91.4%, 8=3.5%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:51:22.456 issued rwts: total=1854,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:51:22.456 latency : target=0, window=0, percentile=100.00%, depth=16 00:51:22.456 filename2: (groupid=0, jobs=1): err= 0: pid=101748: Mon Jul 22 13:13:39 2024 00:51:22.456 read: IOPS=239, BW=960KiB/s (983kB/s)(9644KiB/10049msec) 00:51:22.456 slat (usec): min=3, max=4017, avg=11.75, stdev=81.70 00:51:22.456 clat (msec): min=2, max=160, avg=66.52, stdev=23.78 00:51:22.456 lat (msec): min=2, max=160, avg=66.53, stdev=23.78 00:51:22.456 clat percentiles (msec): 00:51:22.456 | 1.00th=[ 6], 5.00th=[ 36], 10.00th=[ 41], 20.00th=[ 47], 00:51:22.456 | 30.00th=[ 55], 40.00th=[ 61], 50.00th=[ 65], 60.00th=[ 70], 00:51:22.456 | 70.00th=[ 73], 80.00th=[ 85], 90.00th=[ 97], 95.00th=[ 112], 00:51:22.456 | 99.00th=[ 123], 99.50th=[ 132], 99.90th=[ 161], 99.95th=[ 161], 00:51:22.456 | 99.99th=[ 161] 00:51:22.456 bw ( KiB/s): min= 560, max= 1458, per=4.78%, avg=957.40, stdev=229.94, samples=20 00:51:22.456 iops : min= 140, max= 364, avg=239.25, stdev=57.41, samples=20 00:51:22.456 lat (msec) : 4=0.66%, 10=1.33%, 20=0.66%, 50=24.10%, 100=63.83% 00:51:22.457 lat (msec) : 250=9.42% 00:51:22.457 cpu : usr=43.05%, sys=0.85%, ctx=1161, majf=0, minf=9 00:51:22.457 IO depths : 1=0.8%, 2=1.6%, 4=7.8%, 8=77.2%, 16=12.7%, 32=0.0%, >=64=0.0% 00:51:22.457 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:51:22.457 complete : 0=0.0%, 4=89.4%, 8=6.0%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:51:22.457 issued rwts: total=2411,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:51:22.457 latency : target=0, 
window=0, percentile=100.00%, depth=16 00:51:22.457 00:51:22.457 Run status group 0 (all jobs): 00:51:22.457 READ: bw=19.5MiB/s (20.5MB/s), 716KiB/s-1008KiB/s (733kB/s-1032kB/s), io=197MiB (206MB), run=10005-10051msec 00:51:22.457 13:13:39 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:51:22.457 13:13:39 -- target/dif.sh@43 -- # local sub 00:51:22.457 13:13:39 -- target/dif.sh@45 -- # for sub in "$@" 00:51:22.457 13:13:39 -- target/dif.sh@46 -- # destroy_subsystem 0 00:51:22.457 13:13:39 -- target/dif.sh@36 -- # local sub_id=0 00:51:22.457 13:13:39 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:51:22.457 13:13:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:51:22.457 13:13:39 -- common/autotest_common.sh@10 -- # set +x 00:51:22.457 13:13:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:51:22.457 13:13:39 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:51:22.457 13:13:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:51:22.457 13:13:39 -- common/autotest_common.sh@10 -- # set +x 00:51:22.457 13:13:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:51:22.457 13:13:40 -- target/dif.sh@45 -- # for sub in "$@" 00:51:22.457 13:13:40 -- target/dif.sh@46 -- # destroy_subsystem 1 00:51:22.457 13:13:40 -- target/dif.sh@36 -- # local sub_id=1 00:51:22.457 13:13:40 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:51:22.457 13:13:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:51:22.457 13:13:40 -- common/autotest_common.sh@10 -- # set +x 00:51:22.457 13:13:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:51:22.457 13:13:40 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:51:22.457 13:13:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:51:22.457 13:13:40 -- common/autotest_common.sh@10 -- # set +x 00:51:22.457 13:13:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:51:22.457 13:13:40 -- target/dif.sh@45 -- # for sub in "$@" 00:51:22.457 13:13:40 -- target/dif.sh@46 -- # destroy_subsystem 2 00:51:22.457 13:13:40 -- target/dif.sh@36 -- # local sub_id=2 00:51:22.457 13:13:40 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:51:22.457 13:13:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:51:22.457 13:13:40 -- common/autotest_common.sh@10 -- # set +x 00:51:22.457 13:13:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:51:22.457 13:13:40 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:51:22.457 13:13:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:51:22.457 13:13:40 -- common/autotest_common.sh@10 -- # set +x 00:51:22.457 13:13:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:51:22.457 13:13:40 -- target/dif.sh@115 -- # NULL_DIF=1 00:51:22.457 13:13:40 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:51:22.457 13:13:40 -- target/dif.sh@115 -- # numjobs=2 00:51:22.457 13:13:40 -- target/dif.sh@115 -- # iodepth=8 00:51:22.457 13:13:40 -- target/dif.sh@115 -- # runtime=5 00:51:22.457 13:13:40 -- target/dif.sh@115 -- # files=1 00:51:22.457 13:13:40 -- target/dif.sh@117 -- # create_subsystems 0 1 00:51:22.457 13:13:40 -- target/dif.sh@28 -- # local sub 00:51:22.457 13:13:40 -- target/dif.sh@30 -- # for sub in "$@" 00:51:22.457 13:13:40 -- target/dif.sh@31 -- # create_subsystem 0 00:51:22.457 13:13:40 -- target/dif.sh@18 -- # local sub_id=0 00:51:22.457 13:13:40 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 
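For reference, the subsystem teardown and re-creation that destroy_subsystems/create_subsystems drive in the trace above and below collapses to the following RPC sequence (a sketch assembled from the traced rpc_cmd calls, shown for subsystem 0 only; rpc_cmd is the suite's RPC helper):

    # destroy_subsystem 0: drop the NVMe-oF subsystem, then its backing null bdev
    rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
    rpc_cmd bdev_null_delete bdev_null0

    # create_subsystem 0 for the next pass: 64 MB null bdev, 512-byte blocks,
    # 16-byte metadata, DIF type 1, exported over NVMe/TCP on 10.0.0.2:4420
    rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420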
00:51:22.457 13:13:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:51:22.457 13:13:40 -- common/autotest_common.sh@10 -- # set +x 00:51:22.457 bdev_null0 00:51:22.457 13:13:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:51:22.457 13:13:40 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:51:22.457 13:13:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:51:22.457 13:13:40 -- common/autotest_common.sh@10 -- # set +x 00:51:22.457 13:13:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:51:22.457 13:13:40 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:51:22.457 13:13:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:51:22.457 13:13:40 -- common/autotest_common.sh@10 -- # set +x 00:51:22.457 13:13:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:51:22.457 13:13:40 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:51:22.457 13:13:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:51:22.457 13:13:40 -- common/autotest_common.sh@10 -- # set +x 00:51:22.457 [2024-07-22 13:13:40.072277] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:51:22.457 13:13:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:51:22.457 13:13:40 -- target/dif.sh@30 -- # for sub in "$@" 00:51:22.457 13:13:40 -- target/dif.sh@31 -- # create_subsystem 1 00:51:22.457 13:13:40 -- target/dif.sh@18 -- # local sub_id=1 00:51:22.457 13:13:40 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:51:22.457 13:13:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:51:22.457 13:13:40 -- common/autotest_common.sh@10 -- # set +x 00:51:22.457 bdev_null1 00:51:22.457 13:13:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:51:22.457 13:13:40 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:51:22.457 13:13:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:51:22.457 13:13:40 -- common/autotest_common.sh@10 -- # set +x 00:51:22.457 13:13:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:51:22.457 13:13:40 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:51:22.457 13:13:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:51:22.457 13:13:40 -- common/autotest_common.sh@10 -- # set +x 00:51:22.457 13:13:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:51:22.457 13:13:40 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:51:22.457 13:13:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:51:22.457 13:13:40 -- common/autotest_common.sh@10 -- # set +x 00:51:22.457 13:13:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:51:22.457 13:13:40 -- target/dif.sh@118 -- # fio /dev/fd/62 00:51:22.457 13:13:40 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:51:22.457 13:13:40 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:51:22.457 13:13:40 -- nvmf/common.sh@520 -- # config=() 00:51:22.457 13:13:40 -- nvmf/common.sh@520 -- # local subsystem config 00:51:22.457 13:13:40 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:51:22.457 13:13:40 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:51:22.457 
13:13:40 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:51:22.457 13:13:40 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:51:22.457 { 00:51:22.457 "params": { 00:51:22.457 "name": "Nvme$subsystem", 00:51:22.457 "trtype": "$TEST_TRANSPORT", 00:51:22.457 "traddr": "$NVMF_FIRST_TARGET_IP", 00:51:22.457 "adrfam": "ipv4", 00:51:22.457 "trsvcid": "$NVMF_PORT", 00:51:22.457 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:51:22.457 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:51:22.457 "hdgst": ${hdgst:-false}, 00:51:22.457 "ddgst": ${ddgst:-false} 00:51:22.457 }, 00:51:22.457 "method": "bdev_nvme_attach_controller" 00:51:22.457 } 00:51:22.457 EOF 00:51:22.457 )") 00:51:22.457 13:13:40 -- target/dif.sh@82 -- # gen_fio_conf 00:51:22.457 13:13:40 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:51:22.457 13:13:40 -- target/dif.sh@54 -- # local file 00:51:22.457 13:13:40 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:51:22.457 13:13:40 -- common/autotest_common.sh@1318 -- # local sanitizers 00:51:22.457 13:13:40 -- target/dif.sh@56 -- # cat 00:51:22.457 13:13:40 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:51:22.457 13:13:40 -- common/autotest_common.sh@1320 -- # shift 00:51:22.457 13:13:40 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:51:22.457 13:13:40 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:51:22.457 13:13:40 -- nvmf/common.sh@542 -- # cat 00:51:22.457 13:13:40 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:51:22.457 13:13:40 -- common/autotest_common.sh@1324 -- # grep libasan 00:51:22.457 13:13:40 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:51:22.457 13:13:40 -- target/dif.sh@72 -- # (( file = 1 )) 00:51:22.457 13:13:40 -- target/dif.sh@72 -- # (( file <= files )) 00:51:22.457 13:13:40 -- target/dif.sh@73 -- # cat 00:51:22.457 13:13:40 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:51:22.457 13:13:40 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:51:22.457 { 00:51:22.457 "params": { 00:51:22.457 "name": "Nvme$subsystem", 00:51:22.457 "trtype": "$TEST_TRANSPORT", 00:51:22.457 "traddr": "$NVMF_FIRST_TARGET_IP", 00:51:22.457 "adrfam": "ipv4", 00:51:22.457 "trsvcid": "$NVMF_PORT", 00:51:22.457 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:51:22.457 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:51:22.457 "hdgst": ${hdgst:-false}, 00:51:22.457 "ddgst": ${ddgst:-false} 00:51:22.457 }, 00:51:22.457 "method": "bdev_nvme_attach_controller" 00:51:22.457 } 00:51:22.457 EOF 00:51:22.457 )") 00:51:22.457 13:13:40 -- nvmf/common.sh@542 -- # cat 00:51:22.457 13:13:40 -- target/dif.sh@72 -- # (( file++ )) 00:51:22.457 13:13:40 -- target/dif.sh@72 -- # (( file <= files )) 00:51:22.457 13:13:40 -- nvmf/common.sh@544 -- # jq . 
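The fio_bdev/fio_plugin wrapper traced here looks up any sanitizer runtime the SPDK fio plugin links against and preloads it together with the plugin before launching fio. Consolidated, the sequence looks roughly like this (a sketch reconstructed from the traced commands, not a verbatim excerpt of autotest_common.sh; the real wrapper repeats the check for libclang_rt.asan):

    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')   # empty in this run
    LD_PRELOAD="$asan_lib $plugin" \
        /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
    # /dev/fd/62 carries the bdev_nvme_attach_controller JSON printed below,
    # /dev/fd/61 the job file generated by gen_fio_conf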
00:51:22.457 13:13:40 -- nvmf/common.sh@545 -- # IFS=, 00:51:22.457 13:13:40 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:51:22.457 "params": { 00:51:22.457 "name": "Nvme0", 00:51:22.457 "trtype": "tcp", 00:51:22.457 "traddr": "10.0.0.2", 00:51:22.457 "adrfam": "ipv4", 00:51:22.457 "trsvcid": "4420", 00:51:22.457 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:51:22.457 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:51:22.457 "hdgst": false, 00:51:22.457 "ddgst": false 00:51:22.457 }, 00:51:22.458 "method": "bdev_nvme_attach_controller" 00:51:22.458 },{ 00:51:22.458 "params": { 00:51:22.458 "name": "Nvme1", 00:51:22.458 "trtype": "tcp", 00:51:22.458 "traddr": "10.0.0.2", 00:51:22.458 "adrfam": "ipv4", 00:51:22.458 "trsvcid": "4420", 00:51:22.458 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:51:22.458 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:51:22.458 "hdgst": false, 00:51:22.458 "ddgst": false 00:51:22.458 }, 00:51:22.458 "method": "bdev_nvme_attach_controller" 00:51:22.458 }' 00:51:22.458 13:13:40 -- common/autotest_common.sh@1324 -- # asan_lib= 00:51:22.458 13:13:40 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:51:22.458 13:13:40 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:51:22.458 13:13:40 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:51:22.458 13:13:40 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:51:22.458 13:13:40 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:51:22.458 13:13:40 -- common/autotest_common.sh@1324 -- # asan_lib= 00:51:22.458 13:13:40 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:51:22.458 13:13:40 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:51:22.458 13:13:40 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:51:22.458 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:51:22.458 ... 00:51:22.458 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:51:22.458 ... 00:51:22.458 fio-3.35 00:51:22.458 Starting 4 threads 00:51:22.458 [2024-07-22 13:13:40.802796] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
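The job file handed to fio on /dev/fd/61 is produced by gen_fio_conf; a hypothetical file consistent with the fio banner and the dif.sh@115 parameters traced above would look like this (bdev names and any global options beyond the traced values are assumptions, not the actual generated file):

    [global]
    ioengine=spdk_bdev
    thread=1
    rw=randread
    bs=8k,16k,128k
    iodepth=8
    numjobs=2
    runtime=5
    time_based=1

    [filename0]
    filename=Nvme0n1

    [filename1]
    filename=Nvme1n1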
00:51:22.458 [2024-07-22 13:13:40.802861] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:51:26.639 00:51:26.639 filename0: (groupid=0, jobs=1): err= 0: pid=101880: Mon Jul 22 13:13:45 2024 00:51:26.639 read: IOPS=2034, BW=15.9MiB/s (16.7MB/s)(79.5MiB/5001msec) 00:51:26.639 slat (nsec): min=6899, max=78546, avg=16187.51, stdev=5944.58 00:51:26.639 clat (usec): min=2947, max=5817, avg=3852.49, stdev=165.69 00:51:26.639 lat (usec): min=2959, max=5831, avg=3868.68, stdev=165.98 00:51:26.639 clat percentiles (usec): 00:51:26.639 | 1.00th=[ 3556], 5.00th=[ 3654], 10.00th=[ 3687], 20.00th=[ 3752], 00:51:26.639 | 30.00th=[ 3785], 40.00th=[ 3818], 50.00th=[ 3851], 60.00th=[ 3884], 00:51:26.639 | 70.00th=[ 3916], 80.00th=[ 3949], 90.00th=[ 4015], 95.00th=[ 4113], 00:51:26.639 | 99.00th=[ 4293], 99.50th=[ 4424], 99.90th=[ 5669], 99.95th=[ 5735], 00:51:26.639 | 99.99th=[ 5800] 00:51:26.639 bw ( KiB/s): min=15903, max=16512, per=24.95%, avg=16287.89, stdev=202.62, samples=9 00:51:26.639 iops : min= 1987, max= 2064, avg=2035.89, stdev=25.54, samples=9 00:51:26.639 lat (msec) : 4=86.90%, 10=13.10% 00:51:26.639 cpu : usr=94.44%, sys=4.42%, ctx=24, majf=0, minf=9 00:51:26.639 IO depths : 1=12.4%, 2=25.0%, 4=50.0%, 8=12.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:51:26.639 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:51:26.639 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:51:26.639 issued rwts: total=10176,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:51:26.639 latency : target=0, window=0, percentile=100.00%, depth=8 00:51:26.639 filename0: (groupid=0, jobs=1): err= 0: pid=101881: Mon Jul 22 13:13:45 2024 00:51:26.639 read: IOPS=2055, BW=16.1MiB/s (16.8MB/s)(80.3MiB/5001msec) 00:51:26.639 slat (nsec): min=6317, max=73746, avg=9155.51, stdev=4338.67 00:51:26.639 clat (usec): min=703, max=5842, avg=3848.73, stdev=283.80 00:51:26.639 lat (usec): min=710, max=5853, avg=3857.88, stdev=283.78 00:51:26.639 clat percentiles (usec): 00:51:26.639 | 1.00th=[ 2180], 5.00th=[ 3654], 10.00th=[ 3720], 20.00th=[ 3785], 00:51:26.639 | 30.00th=[ 3818], 40.00th=[ 3818], 50.00th=[ 3851], 60.00th=[ 3884], 00:51:26.639 | 70.00th=[ 3916], 80.00th=[ 3982], 90.00th=[ 4047], 95.00th=[ 4113], 00:51:26.639 | 99.00th=[ 4293], 99.50th=[ 4359], 99.90th=[ 4752], 99.95th=[ 5800], 00:51:26.639 | 99.99th=[ 5866] 00:51:26.639 bw ( KiB/s): min=16256, max=16672, per=25.18%, avg=16437.33, stdev=120.53, samples=9 00:51:26.639 iops : min= 2032, max= 2084, avg=2054.67, stdev=15.07, samples=9 00:51:26.639 lat (usec) : 750=0.05%, 1000=0.01% 00:51:26.639 lat (msec) : 2=0.54%, 4=84.00%, 10=15.40% 00:51:26.639 cpu : usr=94.44%, sys=4.42%, ctx=24, majf=0, minf=0 00:51:26.639 IO depths : 1=9.2%, 2=21.1%, 4=53.5%, 8=16.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:51:26.639 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:51:26.639 complete : 0=0.0%, 4=89.6%, 8=10.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:51:26.639 issued rwts: total=10281,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:51:26.639 latency : target=0, window=0, percentile=100.00%, depth=8 00:51:26.639 filename1: (groupid=0, jobs=1): err= 0: pid=101882: Mon Jul 22 13:13:45 2024 00:51:26.639 read: IOPS=2035, BW=15.9MiB/s (16.7MB/s)(79.6MiB/5002msec) 00:51:26.639 slat (nsec): min=6889, max=78511, avg=14823.27, stdev=6351.88 00:51:26.639 clat (usec): min=2822, max=5839, avg=3861.42, stdev=159.07 00:51:26.639 lat (usec): min=2833, max=5851, avg=3876.24, stdev=159.12 00:51:26.639 clat percentiles 
(usec): 00:51:26.639 | 1.00th=[ 3589], 5.00th=[ 3654], 10.00th=[ 3687], 20.00th=[ 3752], 00:51:26.639 | 30.00th=[ 3785], 40.00th=[ 3818], 50.00th=[ 3851], 60.00th=[ 3884], 00:51:26.639 | 70.00th=[ 3916], 80.00th=[ 3982], 90.00th=[ 4047], 95.00th=[ 4113], 00:51:26.639 | 99.00th=[ 4293], 99.50th=[ 4424], 99.90th=[ 4883], 99.95th=[ 5800], 00:51:26.639 | 99.99th=[ 5800] 00:51:26.639 bw ( KiB/s): min=16000, max=16512, per=24.97%, avg=16298.67, stdev=181.02, samples=9 00:51:26.639 iops : min= 2000, max= 2064, avg=2037.33, stdev=22.63, samples=9 00:51:26.639 lat (msec) : 4=85.61%, 10=14.39% 00:51:26.639 cpu : usr=93.84%, sys=5.02%, ctx=7, majf=0, minf=9 00:51:26.639 IO depths : 1=12.3%, 2=25.0%, 4=50.0%, 8=12.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:51:26.639 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:51:26.639 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:51:26.640 issued rwts: total=10184,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:51:26.640 latency : target=0, window=0, percentile=100.00%, depth=8 00:51:26.640 filename1: (groupid=0, jobs=1): err= 0: pid=101883: Mon Jul 22 13:13:45 2024 00:51:26.640 read: IOPS=2034, BW=15.9MiB/s (16.7MB/s)(79.5MiB/5002msec) 00:51:26.640 slat (nsec): min=6690, max=79443, avg=16078.00, stdev=6205.03 00:51:26.640 clat (usec): min=1889, max=6705, avg=3851.99, stdev=179.84 00:51:26.640 lat (usec): min=1900, max=6731, avg=3868.07, stdev=180.32 00:51:26.640 clat percentiles (usec): 00:51:26.640 | 1.00th=[ 3556], 5.00th=[ 3654], 10.00th=[ 3687], 20.00th=[ 3752], 00:51:26.640 | 30.00th=[ 3785], 40.00th=[ 3818], 50.00th=[ 3818], 60.00th=[ 3851], 00:51:26.640 | 70.00th=[ 3916], 80.00th=[ 3949], 90.00th=[ 4015], 95.00th=[ 4113], 00:51:26.640 | 99.00th=[ 4293], 99.50th=[ 4424], 99.90th=[ 5800], 99.95th=[ 6652], 00:51:26.640 | 99.99th=[ 6718] 00:51:26.640 bw ( KiB/s): min=15903, max=16512, per=24.95%, avg=16287.89, stdev=202.62, samples=9 00:51:26.640 iops : min= 1987, max= 2064, avg=2035.89, stdev=25.54, samples=9 00:51:26.640 lat (msec) : 2=0.01%, 4=87.01%, 10=12.98% 00:51:26.640 cpu : usr=94.00%, sys=4.88%, ctx=6, majf=0, minf=9 00:51:26.640 IO depths : 1=12.4%, 2=25.0%, 4=50.0%, 8=12.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:51:26.640 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:51:26.640 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:51:26.640 issued rwts: total=10176,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:51:26.640 latency : target=0, window=0, percentile=100.00%, depth=8 00:51:26.640 00:51:26.640 Run status group 0 (all jobs): 00:51:26.640 READ: bw=63.8MiB/s (66.8MB/s), 15.9MiB/s-16.1MiB/s (16.7MB/s-16.8MB/s), io=319MiB (334MB), run=5001-5002msec 00:51:26.908 13:13:46 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:51:26.908 13:13:46 -- target/dif.sh@43 -- # local sub 00:51:26.908 13:13:46 -- target/dif.sh@45 -- # for sub in "$@" 00:51:26.908 13:13:46 -- target/dif.sh@46 -- # destroy_subsystem 0 00:51:26.908 13:13:46 -- target/dif.sh@36 -- # local sub_id=0 00:51:26.908 13:13:46 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:51:26.908 13:13:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:51:26.908 13:13:46 -- common/autotest_common.sh@10 -- # set +x 00:51:26.908 13:13:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:51:26.908 13:13:46 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:51:26.908 13:13:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:51:26.908 13:13:46 -- 
common/autotest_common.sh@10 -- # set +x 00:51:26.908 13:13:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:51:26.908 13:13:46 -- target/dif.sh@45 -- # for sub in "$@" 00:51:26.908 13:13:46 -- target/dif.sh@46 -- # destroy_subsystem 1 00:51:26.908 13:13:46 -- target/dif.sh@36 -- # local sub_id=1 00:51:26.908 13:13:46 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:51:26.908 13:13:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:51:26.908 13:13:46 -- common/autotest_common.sh@10 -- # set +x 00:51:26.908 13:13:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:51:26.908 13:13:46 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:51:26.908 13:13:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:51:26.908 13:13:46 -- common/autotest_common.sh@10 -- # set +x 00:51:26.908 13:13:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:51:26.908 00:51:26.908 real 0m23.539s 00:51:26.908 user 2m7.380s 00:51:26.908 sys 0m4.749s 00:51:26.908 13:13:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:51:26.908 13:13:46 -- common/autotest_common.sh@10 -- # set +x 00:51:26.908 ************************************ 00:51:26.908 END TEST fio_dif_rand_params 00:51:26.908 ************************************ 00:51:26.908 13:13:46 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:51:26.908 13:13:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:51:26.908 13:13:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:51:26.908 13:13:46 -- common/autotest_common.sh@10 -- # set +x 00:51:26.908 ************************************ 00:51:26.908 START TEST fio_dif_digest 00:51:26.908 ************************************ 00:51:26.908 13:13:46 -- common/autotest_common.sh@1104 -- # fio_dif_digest 00:51:26.908 13:13:46 -- target/dif.sh@123 -- # local NULL_DIF 00:51:26.908 13:13:46 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:51:26.908 13:13:46 -- target/dif.sh@125 -- # local hdgst ddgst 00:51:26.908 13:13:46 -- target/dif.sh@127 -- # NULL_DIF=3 00:51:26.908 13:13:46 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:51:26.908 13:13:46 -- target/dif.sh@127 -- # numjobs=3 00:51:26.908 13:13:46 -- target/dif.sh@127 -- # iodepth=3 00:51:26.908 13:13:46 -- target/dif.sh@127 -- # runtime=10 00:51:26.908 13:13:46 -- target/dif.sh@128 -- # hdgst=true 00:51:26.908 13:13:46 -- target/dif.sh@128 -- # ddgst=true 00:51:26.908 13:13:46 -- target/dif.sh@130 -- # create_subsystems 0 00:51:26.909 13:13:46 -- target/dif.sh@28 -- # local sub 00:51:26.909 13:13:46 -- target/dif.sh@30 -- # for sub in "$@" 00:51:26.909 13:13:46 -- target/dif.sh@31 -- # create_subsystem 0 00:51:26.909 13:13:46 -- target/dif.sh@18 -- # local sub_id=0 00:51:26.909 13:13:46 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:51:26.909 13:13:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:51:26.909 13:13:46 -- common/autotest_common.sh@10 -- # set +x 00:51:26.909 bdev_null0 00:51:26.909 13:13:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:51:26.909 13:13:46 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:51:26.909 13:13:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:51:26.909 13:13:46 -- common/autotest_common.sh@10 -- # set +x 00:51:26.909 13:13:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:51:26.909 13:13:46 -- target/dif.sh@23 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:51:26.909 13:13:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:51:26.909 13:13:46 -- common/autotest_common.sh@10 -- # set +x 00:51:26.909 13:13:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:51:26.909 13:13:46 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:51:26.909 13:13:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:51:26.909 13:13:46 -- common/autotest_common.sh@10 -- # set +x 00:51:26.909 [2024-07-22 13:13:46.271864] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:51:26.909 13:13:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:51:26.909 13:13:46 -- target/dif.sh@131 -- # fio /dev/fd/62 00:51:26.909 13:13:46 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:51:26.909 13:13:46 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:51:26.909 13:13:46 -- nvmf/common.sh@520 -- # config=() 00:51:26.909 13:13:46 -- nvmf/common.sh@520 -- # local subsystem config 00:51:26.909 13:13:46 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:51:26.909 13:13:46 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:51:26.909 13:13:46 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:51:26.909 { 00:51:26.909 "params": { 00:51:26.909 "name": "Nvme$subsystem", 00:51:26.909 "trtype": "$TEST_TRANSPORT", 00:51:26.909 "traddr": "$NVMF_FIRST_TARGET_IP", 00:51:26.909 "adrfam": "ipv4", 00:51:26.909 "trsvcid": "$NVMF_PORT", 00:51:26.909 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:51:26.909 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:51:26.909 "hdgst": ${hdgst:-false}, 00:51:26.909 "ddgst": ${ddgst:-false} 00:51:26.909 }, 00:51:26.909 "method": "bdev_nvme_attach_controller" 00:51:26.909 } 00:51:26.909 EOF 00:51:26.909 )") 00:51:26.909 13:13:46 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:51:26.909 13:13:46 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:51:26.909 13:13:46 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:51:26.909 13:13:46 -- common/autotest_common.sh@1318 -- # local sanitizers 00:51:26.909 13:13:46 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:51:26.909 13:13:46 -- common/autotest_common.sh@1320 -- # shift 00:51:26.909 13:13:46 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:51:26.909 13:13:46 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:51:26.909 13:13:46 -- nvmf/common.sh@542 -- # cat 00:51:26.909 13:13:46 -- target/dif.sh@82 -- # gen_fio_conf 00:51:26.909 13:13:46 -- target/dif.sh@54 -- # local file 00:51:26.909 13:13:46 -- target/dif.sh@56 -- # cat 00:51:26.909 13:13:46 -- common/autotest_common.sh@1324 -- # grep libasan 00:51:26.909 13:13:46 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:51:26.909 13:13:46 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:51:26.909 13:13:46 -- nvmf/common.sh@544 -- # jq . 
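Note on this digest pass: dif.sh@128 (traced further up) set hdgst=true and ddgst=true before the call, so the ${hdgst:-false} / ${ddgst:-false} defaults in the heredoc expand to true and the attach-controller JSON printed next enables both digests:

    "hdgst": true,
    "ddgst": true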
00:51:26.909 13:13:46 -- target/dif.sh@72 -- # (( file = 1 )) 00:51:26.909 13:13:46 -- target/dif.sh@72 -- # (( file <= files )) 00:51:26.909 13:13:46 -- nvmf/common.sh@545 -- # IFS=, 00:51:26.909 13:13:46 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:51:26.909 "params": { 00:51:26.909 "name": "Nvme0", 00:51:26.909 "trtype": "tcp", 00:51:26.909 "traddr": "10.0.0.2", 00:51:26.909 "adrfam": "ipv4", 00:51:26.909 "trsvcid": "4420", 00:51:26.909 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:51:26.909 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:51:26.909 "hdgst": true, 00:51:26.909 "ddgst": true 00:51:26.909 }, 00:51:26.909 "method": "bdev_nvme_attach_controller" 00:51:26.909 }' 00:51:26.909 13:13:46 -- common/autotest_common.sh@1324 -- # asan_lib= 00:51:26.909 13:13:46 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:51:26.909 13:13:46 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:51:26.909 13:13:46 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:51:26.909 13:13:46 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:51:26.909 13:13:46 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:51:27.181 13:13:46 -- common/autotest_common.sh@1324 -- # asan_lib= 00:51:27.181 13:13:46 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:51:27.181 13:13:46 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:51:27.181 13:13:46 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:51:27.181 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:51:27.181 ... 00:51:27.181 fio-3.35 00:51:27.181 Starting 3 threads 00:51:27.746 [2024-07-22 13:13:46.863639] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
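The matching job file (again hypothetical, generated by gen_fio_conf and passed on /dev/fd/61) differs from the earlier randread run only in the dif.sh@127 values traced above:

    bs=128k,128k,128k
    iodepth=3
    numjobs=3
    runtime=10
    # a single [filename0] section, backed by the one DIF type 3 null bdev;
    # header/data digests are enabled in the JSON config above, not in the job file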
00:51:27.747 [2024-07-22 13:13:46.863734] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:51:37.712 00:51:37.712 filename0: (groupid=0, jobs=1): err= 0: pid=101989: Mon Jul 22 13:13:56 2024 00:51:37.712 read: IOPS=188, BW=23.6MiB/s (24.7MB/s)(236MiB/10005msec) 00:51:37.712 slat (nsec): min=6815, max=51773, avg=12279.88, stdev=3837.63 00:51:37.712 clat (usec): min=7550, max=19525, avg=15872.26, stdev=1419.08 00:51:37.712 lat (usec): min=7560, max=19538, avg=15884.54, stdev=1419.24 00:51:37.712 clat percentiles (usec): 00:51:37.712 | 1.00th=[ 9896], 5.00th=[14222], 10.00th=[14746], 20.00th=[15270], 00:51:37.712 | 30.00th=[15533], 40.00th=[15795], 50.00th=[15926], 60.00th=[16188], 00:51:37.712 | 70.00th=[16450], 80.00th=[16909], 90.00th=[17171], 95.00th=[17695], 00:51:37.712 | 99.00th=[18482], 99.50th=[18744], 99.90th=[19530], 99.95th=[19530], 00:51:37.712 | 99.99th=[19530] 00:51:37.712 bw ( KiB/s): min=23040, max=26880, per=27.78%, avg=24188.47, stdev=914.58, samples=19 00:51:37.712 iops : min= 180, max= 210, avg=188.89, stdev= 7.17, samples=19 00:51:37.712 lat (msec) : 10=1.16%, 20=98.84% 00:51:37.712 cpu : usr=93.95%, sys=4.85%, ctx=15, majf=0, minf=9 00:51:37.712 IO depths : 1=4.0%, 2=96.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:51:37.712 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:51:37.712 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:51:37.712 issued rwts: total=1889,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:51:37.712 latency : target=0, window=0, percentile=100.00%, depth=3 00:51:37.712 filename0: (groupid=0, jobs=1): err= 0: pid=101990: Mon Jul 22 13:13:56 2024 00:51:37.712 read: IOPS=231, BW=29.0MiB/s (30.4MB/s)(290MiB/10005msec) 00:51:37.712 slat (usec): min=6, max=114, avg=12.24, stdev= 4.85 00:51:37.712 clat (usec): min=6759, max=16875, avg=12925.74, stdev=1394.44 00:51:37.712 lat (usec): min=6774, max=16883, avg=12937.97, stdev=1394.39 00:51:37.712 clat percentiles (usec): 00:51:37.712 | 1.00th=[ 7767], 5.00th=[10945], 10.00th=[11600], 20.00th=[12125], 00:51:37.712 | 30.00th=[12518], 40.00th=[12780], 50.00th=[13042], 60.00th=[13304], 00:51:37.712 | 70.00th=[13566], 80.00th=[13960], 90.00th=[14484], 95.00th=[14877], 00:51:37.712 | 99.00th=[15664], 99.50th=[15926], 99.90th=[16712], 99.95th=[16909], 00:51:37.712 | 99.99th=[16909] 00:51:37.712 bw ( KiB/s): min=27703, max=33280, per=34.06%, avg=29656.16, stdev=1287.83, samples=19 00:51:37.712 iops : min= 216, max= 260, avg=231.63, stdev=10.08, samples=19 00:51:37.712 lat (msec) : 10=3.84%, 20=96.16% 00:51:37.712 cpu : usr=93.56%, sys=5.11%, ctx=40, majf=0, minf=0 00:51:37.712 IO depths : 1=1.9%, 2=98.1%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:51:37.712 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:51:37.712 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:51:37.712 issued rwts: total=2319,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:51:37.712 latency : target=0, window=0, percentile=100.00%, depth=3 00:51:37.712 filename0: (groupid=0, jobs=1): err= 0: pid=101991: Mon Jul 22 13:13:56 2024 00:51:37.712 read: IOPS=259, BW=32.5MiB/s (34.0MB/s)(325MiB/10006msec) 00:51:37.712 slat (nsec): min=6902, max=50880, avg=12560.74, stdev=4268.81 00:51:37.712 clat (usec): min=8342, max=53356, avg=11538.09, stdev=3478.35 00:51:37.712 lat (usec): min=8352, max=53367, avg=11550.65, stdev=3478.41 00:51:37.712 clat percentiles (usec): 00:51:37.712 | 1.00th=[ 9372], 5.00th=[ 9896], 
10.00th=[10290], 20.00th=[10552], 00:51:37.712 | 30.00th=[10814], 40.00th=[11076], 50.00th=[11207], 60.00th=[11469], 00:51:37.712 | 70.00th=[11731], 80.00th=[11863], 90.00th=[12256], 95.00th=[12780], 00:51:37.712 | 99.00th=[13960], 99.50th=[51643], 99.90th=[53216], 99.95th=[53216], 00:51:37.712 | 99.99th=[53216] 00:51:37.712 bw ( KiB/s): min=27136, max=35584, per=38.13%, avg=33200.95, stdev=1965.79, samples=19 00:51:37.712 iops : min= 212, max= 278, avg=259.26, stdev=15.32, samples=19 00:51:37.712 lat (msec) : 10=5.85%, 20=93.46%, 100=0.69% 00:51:37.712 cpu : usr=92.37%, sys=6.20%, ctx=14, majf=0, minf=9 00:51:37.712 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:51:37.712 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:51:37.712 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:51:37.712 issued rwts: total=2598,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:51:37.712 latency : target=0, window=0, percentile=100.00%, depth=3 00:51:37.712 00:51:37.712 Run status group 0 (all jobs): 00:51:37.712 READ: bw=85.0MiB/s (89.2MB/s), 23.6MiB/s-32.5MiB/s (24.7MB/s-34.0MB/s), io=851MiB (892MB), run=10005-10006msec 00:51:37.971 13:13:57 -- target/dif.sh@132 -- # destroy_subsystems 0 00:51:37.971 13:13:57 -- target/dif.sh@43 -- # local sub 00:51:37.971 13:13:57 -- target/dif.sh@45 -- # for sub in "$@" 00:51:37.971 13:13:57 -- target/dif.sh@46 -- # destroy_subsystem 0 00:51:37.971 13:13:57 -- target/dif.sh@36 -- # local sub_id=0 00:51:37.971 13:13:57 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:51:37.971 13:13:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:51:37.971 13:13:57 -- common/autotest_common.sh@10 -- # set +x 00:51:37.971 13:13:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:51:37.971 13:13:57 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:51:37.971 13:13:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:51:37.971 13:13:57 -- common/autotest_common.sh@10 -- # set +x 00:51:37.971 13:13:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:51:37.971 00:51:37.971 real 0m10.975s 00:51:37.971 user 0m28.654s 00:51:37.971 sys 0m1.870s 00:51:37.971 13:13:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:51:37.971 ************************************ 00:51:37.971 13:13:57 -- common/autotest_common.sh@10 -- # set +x 00:51:37.971 END TEST fio_dif_digest 00:51:37.971 ************************************ 00:51:37.971 13:13:57 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:51:37.971 13:13:57 -- target/dif.sh@147 -- # nvmftestfini 00:51:37.971 13:13:57 -- nvmf/common.sh@476 -- # nvmfcleanup 00:51:37.971 13:13:57 -- nvmf/common.sh@116 -- # sync 00:51:37.971 13:13:57 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:51:37.971 13:13:57 -- nvmf/common.sh@119 -- # set +e 00:51:37.971 13:13:57 -- nvmf/common.sh@120 -- # for i in {1..20} 00:51:37.971 13:13:57 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:51:37.971 rmmod nvme_tcp 00:51:37.971 rmmod nvme_fabrics 00:51:37.971 rmmod nvme_keyring 00:51:37.971 13:13:57 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:51:37.971 13:13:57 -- nvmf/common.sh@123 -- # set -e 00:51:37.971 13:13:57 -- nvmf/common.sh@124 -- # return 0 00:51:37.971 13:13:57 -- nvmf/common.sh@477 -- # '[' -n 101228 ']' 00:51:37.971 13:13:57 -- nvmf/common.sh@478 -- # killprocess 101228 00:51:37.971 13:13:57 -- common/autotest_common.sh@926 -- # '[' -z 101228 ']' 00:51:37.971 13:13:57 -- 
common/autotest_common.sh@930 -- # kill -0 101228 00:51:37.971 13:13:57 -- common/autotest_common.sh@931 -- # uname 00:51:37.971 13:13:57 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:51:37.971 13:13:57 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 101228 00:51:37.971 13:13:57 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:51:37.971 killing process with pid 101228 00:51:37.971 13:13:57 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:51:37.971 13:13:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 101228' 00:51:37.971 13:13:57 -- common/autotest_common.sh@945 -- # kill 101228 00:51:37.971 13:13:57 -- common/autotest_common.sh@950 -- # wait 101228 00:51:38.229 13:13:57 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:51:38.229 13:13:57 -- nvmf/common.sh@481 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:51:38.487 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:51:38.745 Waiting for block devices as requested 00:51:38.745 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:51:38.745 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:51:38.745 13:13:58 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:51:38.745 13:13:58 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:51:38.745 13:13:58 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:51:38.745 13:13:58 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:51:38.745 13:13:58 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:51:38.745 13:13:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:51:38.745 13:13:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:51:39.005 13:13:58 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:51:39.005 00:51:39.005 real 0m59.736s 00:51:39.005 user 3m52.945s 00:51:39.005 sys 0m13.990s 00:51:39.005 13:13:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:51:39.005 13:13:58 -- common/autotest_common.sh@10 -- # set +x 00:51:39.005 ************************************ 00:51:39.005 END TEST nvmf_dif 00:51:39.005 ************************************ 00:51:39.005 13:13:58 -- spdk/autotest.sh@301 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:51:39.005 13:13:58 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:51:39.005 13:13:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:51:39.005 13:13:58 -- common/autotest_common.sh@10 -- # set +x 00:51:39.005 ************************************ 00:51:39.005 START TEST nvmf_abort_qd_sizes 00:51:39.005 ************************************ 00:51:39.005 13:13:58 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:51:39.005 * Looking for test storage... 
00:51:39.005 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:51:39.005 13:13:58 -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:51:39.005 13:13:58 -- nvmf/common.sh@7 -- # uname -s 00:51:39.005 13:13:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:51:39.005 13:13:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:51:39.005 13:13:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:51:39.005 13:13:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:51:39.005 13:13:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:51:39.005 13:13:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:51:39.005 13:13:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:51:39.005 13:13:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:51:39.005 13:13:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:51:39.005 13:13:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:51:39.005 13:13:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:864ae4a3-cd96-439b-8ef2-6dab4d992115 00:51:39.005 13:13:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=864ae4a3-cd96-439b-8ef2-6dab4d992115 00:51:39.005 13:13:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:51:39.005 13:13:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:51:39.005 13:13:58 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:51:39.005 13:13:58 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:51:39.005 13:13:58 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:51:39.005 13:13:58 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:51:39.005 13:13:58 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:51:39.005 13:13:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:51:39.005 13:13:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:51:39.005 13:13:58 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:51:39.005 13:13:58 -- paths/export.sh@5 -- # export PATH 00:51:39.005 13:13:58 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:51:39.005 13:13:58 -- nvmf/common.sh@46 -- # : 0 00:51:39.005 13:13:58 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:51:39.005 13:13:58 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:51:39.005 13:13:58 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:51:39.005 13:13:58 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:51:39.005 13:13:58 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:51:39.005 13:13:58 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:51:39.005 13:13:58 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:51:39.005 13:13:58 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:51:39.005 13:13:58 -- target/abort_qd_sizes.sh@73 -- # nvmftestinit 00:51:39.005 13:13:58 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:51:39.005 13:13:58 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:51:39.005 13:13:58 -- nvmf/common.sh@436 -- # prepare_net_devs 00:51:39.005 13:13:58 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:51:39.005 13:13:58 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:51:39.005 13:13:58 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:51:39.005 13:13:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:51:39.005 13:13:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:51:39.005 13:13:58 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:51:39.005 13:13:58 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:51:39.005 13:13:58 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:51:39.005 13:13:58 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:51:39.005 13:13:58 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:51:39.005 13:13:58 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:51:39.005 13:13:58 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:51:39.005 13:13:58 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:51:39.005 13:13:58 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:51:39.005 13:13:58 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:51:39.005 13:13:58 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:51:39.005 13:13:58 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:51:39.005 13:13:58 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:51:39.005 13:13:58 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:51:39.005 13:13:58 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:51:39.005 13:13:58 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:51:39.005 13:13:58 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:51:39.005 13:13:58 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:51:39.005 13:13:58 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:51:39.005 13:13:58 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:51:39.005 Cannot find device "nvmf_tgt_br" 00:51:39.005 13:13:58 -- nvmf/common.sh@154 -- # true 00:51:39.005 13:13:58 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:51:39.005 Cannot find device "nvmf_tgt_br2" 00:51:39.005 13:13:58 -- nvmf/common.sh@155 -- # true 
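The nvmf_veth_init sequence traced below wires up the test network; reconstructed from the NVMF_* variables above and the ip/iptables commands that follow, the topology is (summary sketch, not a verbatim excerpt of nvmf/common.sh):

    # three veth pairs, target ends moved into netns nvmf_tgt_ns_spdk,
    # host-side peers enslaved to bridge nvmf_br:
    #   nvmf_init_if  10.0.0.1/24 (default netns)     <-> nvmf_init_br (on nvmf_br)
    #   nvmf_tgt_if   10.0.0.2/24 (nvmf_tgt_ns_spdk)  <-> nvmf_tgt_br  (on nvmf_br)
    #   nvmf_tgt_if2  10.0.0.3/24 (nvmf_tgt_ns_spdk)  <-> nvmf_tgt_br2 (on nvmf_br)
    # plus an iptables ACCEPT for TCP/4420 on nvmf_init_if and a FORWARD rule on nvmf_br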
00:51:39.005 13:13:58 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:51:39.005 13:13:58 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:51:39.005 Cannot find device "nvmf_tgt_br" 00:51:39.005 13:13:58 -- nvmf/common.sh@157 -- # true 00:51:39.005 13:13:58 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:51:39.005 Cannot find device "nvmf_tgt_br2" 00:51:39.005 13:13:58 -- nvmf/common.sh@158 -- # true 00:51:39.005 13:13:58 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:51:39.264 13:13:58 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:51:39.264 13:13:58 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:51:39.264 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:51:39.264 13:13:58 -- nvmf/common.sh@161 -- # true 00:51:39.265 13:13:58 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:51:39.265 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:51:39.265 13:13:58 -- nvmf/common.sh@162 -- # true 00:51:39.265 13:13:58 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:51:39.265 13:13:58 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:51:39.265 13:13:58 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:51:39.265 13:13:58 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:51:39.265 13:13:58 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:51:39.265 13:13:58 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:51:39.265 13:13:58 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:51:39.265 13:13:58 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:51:39.265 13:13:58 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:51:39.265 13:13:58 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:51:39.265 13:13:58 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:51:39.265 13:13:58 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:51:39.265 13:13:58 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:51:39.265 13:13:58 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:51:39.265 13:13:58 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:51:39.265 13:13:58 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:51:39.265 13:13:58 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:51:39.265 13:13:58 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:51:39.265 13:13:58 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:51:39.265 13:13:58 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:51:39.265 13:13:58 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:51:39.265 13:13:58 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:51:39.265 13:13:58 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:51:39.265 13:13:58 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:51:39.265 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:51:39.265 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:51:39.265 00:51:39.265 --- 10.0.0.2 ping statistics --- 00:51:39.265 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:51:39.265 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:51:39.265 13:13:58 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:51:39.265 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:51:39.265 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:51:39.265 00:51:39.265 --- 10.0.0.3 ping statistics --- 00:51:39.265 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:51:39.265 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:51:39.265 13:13:58 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:51:39.265 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:51:39.265 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:51:39.265 00:51:39.265 --- 10.0.0.1 ping statistics --- 00:51:39.265 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:51:39.265 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:51:39.265 13:13:58 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:51:39.265 13:13:58 -- nvmf/common.sh@421 -- # return 0 00:51:39.265 13:13:58 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:51:39.265 13:13:58 -- nvmf/common.sh@439 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:51:40.199 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:51:40.199 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:51:40.199 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:51:40.199 13:13:59 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:51:40.199 13:13:59 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:51:40.199 13:13:59 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:51:40.199 13:13:59 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:51:40.199 13:13:59 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:51:40.199 13:13:59 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:51:40.199 13:13:59 -- target/abort_qd_sizes.sh@74 -- # nvmfappstart -m 0xf 00:51:40.199 13:13:59 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:51:40.199 13:13:59 -- common/autotest_common.sh@712 -- # xtrace_disable 00:51:40.199 13:13:59 -- common/autotest_common.sh@10 -- # set +x 00:51:40.199 13:13:59 -- nvmf/common.sh@469 -- # nvmfpid=102582 00:51:40.199 13:13:59 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:51:40.199 13:13:59 -- nvmf/common.sh@470 -- # waitforlisten 102582 00:51:40.199 13:13:59 -- common/autotest_common.sh@819 -- # '[' -z 102582 ']' 00:51:40.199 13:13:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:51:40.199 13:13:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:51:40.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:51:40.199 13:13:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:51:40.199 13:13:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:51:40.199 13:13:59 -- common/autotest_common.sh@10 -- # set +x 00:51:40.199 [2024-07-22 13:13:59.594323] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:51:40.199 [2024-07-22 13:13:59.594411] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:51:40.457 [2024-07-22 13:13:59.737394] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:51:40.457 [2024-07-22 13:13:59.816730] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:51:40.457 [2024-07-22 13:13:59.817162] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:51:40.457 [2024-07-22 13:13:59.817301] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:51:40.457 [2024-07-22 13:13:59.817451] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:51:40.457 [2024-07-22 13:13:59.817671] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:51:40.457 [2024-07-22 13:13:59.817792] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:51:40.457 [2024-07-22 13:13:59.818009] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:51:40.457 [2024-07-22 13:13:59.818035] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:51:41.496 13:14:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:51:41.496 13:14:00 -- common/autotest_common.sh@852 -- # return 0 00:51:41.496 13:14:00 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:51:41.496 13:14:00 -- common/autotest_common.sh@718 -- # xtrace_disable 00:51:41.496 13:14:00 -- common/autotest_common.sh@10 -- # set +x 00:51:41.496 13:14:00 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:51:41.496 13:14:00 -- target/abort_qd_sizes.sh@76 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:51:41.496 13:14:00 -- target/abort_qd_sizes.sh@78 -- # mapfile -t nvmes 00:51:41.496 13:14:00 -- target/abort_qd_sizes.sh@78 -- # nvme_in_userspace 00:51:41.496 13:14:00 -- scripts/common.sh@311 -- # local bdf bdfs 00:51:41.496 13:14:00 -- scripts/common.sh@312 -- # local nvmes 00:51:41.497 13:14:00 -- scripts/common.sh@314 -- # [[ -n '' ]] 00:51:41.497 13:14:00 -- scripts/common.sh@317 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:51:41.497 13:14:00 -- scripts/common.sh@317 -- # iter_pci_class_code 01 08 02 00:51:41.497 13:14:00 -- scripts/common.sh@297 -- # local bdf= 00:51:41.497 13:14:00 -- scripts/common.sh@299 -- # iter_all_pci_class_code 01 08 02 00:51:41.497 13:14:00 -- scripts/common.sh@232 -- # local class 00:51:41.497 13:14:00 -- scripts/common.sh@233 -- # local subclass 00:51:41.497 13:14:00 -- scripts/common.sh@234 -- # local progif 00:51:41.497 13:14:00 -- scripts/common.sh@235 -- # printf %02x 1 00:51:41.497 13:14:00 -- scripts/common.sh@235 -- # class=01 00:51:41.497 13:14:00 -- scripts/common.sh@236 -- # printf %02x 8 00:51:41.497 13:14:00 -- scripts/common.sh@236 -- # subclass=08 00:51:41.497 13:14:00 -- scripts/common.sh@237 -- # printf %02x 2 00:51:41.497 13:14:00 -- scripts/common.sh@237 -- # progif=02 00:51:41.497 13:14:00 -- scripts/common.sh@239 -- # hash lspci 00:51:41.497 13:14:00 -- scripts/common.sh@240 -- # '[' 02 '!=' 00 ']' 00:51:41.497 13:14:00 -- scripts/common.sh@241 -- # lspci -mm -n -D 00:51:41.497 13:14:00 -- scripts/common.sh@242 -- # grep -i -- -p02 00:51:41.497 13:14:00 -- 
scripts/common.sh@243 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:51:41.497 13:14:00 -- scripts/common.sh@244 -- # tr -d '"' 00:51:41.497 13:14:00 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:51:41.497 13:14:00 -- scripts/common.sh@300 -- # pci_can_use 0000:00:06.0 00:51:41.497 13:14:00 -- scripts/common.sh@15 -- # local i 00:51:41.497 13:14:00 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:51:41.497 13:14:00 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:51:41.497 13:14:00 -- scripts/common.sh@24 -- # return 0 00:51:41.497 13:14:00 -- scripts/common.sh@301 -- # echo 0000:00:06.0 00:51:41.497 13:14:00 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:51:41.497 13:14:00 -- scripts/common.sh@300 -- # pci_can_use 0000:00:07.0 00:51:41.497 13:14:00 -- scripts/common.sh@15 -- # local i 00:51:41.497 13:14:00 -- scripts/common.sh@18 -- # [[ =~ 0000:00:07.0 ]] 00:51:41.497 13:14:00 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:51:41.497 13:14:00 -- scripts/common.sh@24 -- # return 0 00:51:41.497 13:14:00 -- scripts/common.sh@301 -- # echo 0000:00:07.0 00:51:41.497 13:14:00 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:51:41.497 13:14:00 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:06.0 ]] 00:51:41.497 13:14:00 -- scripts/common.sh@322 -- # uname -s 00:51:41.497 13:14:00 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:51:41.497 13:14:00 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:51:41.497 13:14:00 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:51:41.497 13:14:00 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:07.0 ]] 00:51:41.497 13:14:00 -- scripts/common.sh@322 -- # uname -s 00:51:41.497 13:14:00 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:51:41.497 13:14:00 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:51:41.497 13:14:00 -- scripts/common.sh@327 -- # (( 2 )) 00:51:41.497 13:14:00 -- scripts/common.sh@328 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:51:41.497 13:14:00 -- target/abort_qd_sizes.sh@79 -- # (( 2 > 0 )) 00:51:41.497 13:14:00 -- target/abort_qd_sizes.sh@81 -- # nvme=0000:00:06.0 00:51:41.497 13:14:00 -- target/abort_qd_sizes.sh@83 -- # run_test spdk_target_abort spdk_target 00:51:41.497 13:14:00 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:51:41.497 13:14:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:51:41.497 13:14:00 -- common/autotest_common.sh@10 -- # set +x 00:51:41.497 ************************************ 00:51:41.497 START TEST spdk_target_abort 00:51:41.497 ************************************ 00:51:41.497 13:14:00 -- common/autotest_common.sh@1104 -- # spdk_target 00:51:41.497 13:14:00 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:51:41.497 13:14:00 -- target/abort_qd_sizes.sh@44 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:51:41.497 13:14:00 -- target/abort_qd_sizes.sh@46 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:06.0 -b spdk_target 00:51:41.497 13:14:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:51:41.497 13:14:00 -- common/autotest_common.sh@10 -- # set +x 00:51:41.497 spdk_targetn1 00:51:41.497 13:14:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:51:41.497 13:14:00 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:51:41.497 13:14:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:51:41.497 13:14:00 -- common/autotest_common.sh@10 -- # set +x 00:51:41.497 [2024-07-22 
13:14:00.755469] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:51:41.497 13:14:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:51:41.497 13:14:00 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:spdk_target -a -s SPDKISFASTANDAWESOME 00:51:41.497 13:14:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:51:41.497 13:14:00 -- common/autotest_common.sh@10 -- # set +x 00:51:41.497 13:14:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:51:41.497 13:14:00 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:spdk_target spdk_targetn1 00:51:41.497 13:14:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:51:41.497 13:14:00 -- common/autotest_common.sh@10 -- # set +x 00:51:41.497 13:14:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:51:41.497 13:14:00 -- target/abort_qd_sizes.sh@51 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:spdk_target -t tcp -a 10.0.0.2 -s 4420 00:51:41.497 13:14:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:51:41.497 13:14:00 -- common/autotest_common.sh@10 -- # set +x 00:51:41.497 [2024-07-22 13:14:00.783600] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:51:41.497 13:14:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:51:41.497 13:14:00 -- target/abort_qd_sizes.sh@53 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:spdk_target 00:51:41.497 13:14:00 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:51:41.497 13:14:00 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:51:41.497 13:14:00 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:51:41.497 13:14:00 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:51:41.497 13:14:00 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:51:41.497 13:14:00 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:51:41.497 13:14:00 -- target/abort_qd_sizes.sh@24 -- # local target r 00:51:41.497 13:14:00 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:51:41.497 13:14:00 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:51:41.497 13:14:00 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:51:41.497 13:14:00 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:51:41.497 13:14:00 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:51:41.497 13:14:00 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:51:41.497 13:14:00 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:51:41.497 13:14:00 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:51:41.497 13:14:00 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:51:41.497 13:14:00 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:51:41.497 13:14:00 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:51:41.497 13:14:00 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:51:41.497 13:14:00 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:51:44.798 Initializing NVMe Controllers 00:51:44.798 Attached to 
NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:51:44.798 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:51:44.798 Initialization complete. Launching workers. 00:51:44.798 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 10404, failed: 0 00:51:44.798 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1083, failed to submit 9321 00:51:44.798 success 787, unsuccess 296, failed 0 00:51:44.798 13:14:03 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:51:44.798 13:14:03 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:51:48.081 [2024-07-22 13:14:07.219235] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c43e0 is same with the state(5) to be set 00:51:48.081 [2024-07-22 13:14:07.219845] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c43e0 is same with the state(5) to be set 00:51:48.081 [2024-07-22 13:14:07.219882] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c43e0 is same with the state(5) to be set 00:51:48.081 [2024-07-22 13:14:07.219892] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c43e0 is same with the state(5) to be set 00:51:48.081 [2024-07-22 13:14:07.219900] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c43e0 is same with the state(5) to be set 00:51:48.081 [2024-07-22 13:14:07.219908] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c43e0 is same with the state(5) to be set 00:51:48.081 [2024-07-22 13:14:07.219916] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c43e0 is same with the state(5) to be set 00:51:48.081 [2024-07-22 13:14:07.219924] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c43e0 is same with the state(5) to be set 00:51:48.081 [2024-07-22 13:14:07.219947] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c43e0 is same with the state(5) to be set 00:51:48.081 [2024-07-22 13:14:07.219954] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c43e0 is same with the state(5) to be set 00:51:48.081 [2024-07-22 13:14:07.219962] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c43e0 is same with the state(5) to be set 00:51:48.081 [2024-07-22 13:14:07.219969] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c43e0 is same with the state(5) to be set 00:51:48.081 Initializing NVMe Controllers 00:51:48.081 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:51:48.081 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:51:48.081 Initialization complete. Launching workers. 
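Each queue-depth pass of this test is one invocation of SPDK's abort example against the subsystem created above; the counters printed around each "Launching workers." banner are that run's totals for I/O completed, aborts submitted, aborts that could not be submitted, and submitted aborts that succeeded versus failed ("success"/"unsuccess"). A sketch of the -q 24 pass, with the transport string exactly as used in this run:

    /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target'
    # -q  queue depth per worker (the test sweeps 4, 24 and 64)
    # -w  workload pattern (mixed read/write)
    # -M  read percentage of the mix
    # -o  I/O size in bytes
    # -r  NVMe-oF transport ID of the target subsystem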
00:51:48.081 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 5974, failed: 0 00:51:48.081 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1244, failed to submit 4730 00:51:48.081 success 266, unsuccess 978, failed 0 00:51:48.081 13:14:07 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:51:48.081 13:14:07 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:51:51.365 Initializing NVMe Controllers 00:51:51.365 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:51:51.365 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:51:51.365 Initialization complete. Launching workers. 00:51:51.365 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 31130, failed: 0 00:51:51.365 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 2657, failed to submit 28473 00:51:51.365 success 466, unsuccess 2191, failed 0 00:51:51.365 13:14:10 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:spdk_target 00:51:51.365 13:14:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:51:51.365 13:14:10 -- common/autotest_common.sh@10 -- # set +x 00:51:51.365 13:14:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:51:51.365 13:14:10 -- target/abort_qd_sizes.sh@56 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:51:51.365 13:14:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:51:51.365 13:14:10 -- common/autotest_common.sh@10 -- # set +x 00:51:51.624 13:14:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:51:51.624 13:14:11 -- target/abort_qd_sizes.sh@62 -- # killprocess 102582 00:51:51.624 13:14:11 -- common/autotest_common.sh@926 -- # '[' -z 102582 ']' 00:51:51.624 13:14:11 -- common/autotest_common.sh@930 -- # kill -0 102582 00:51:51.624 13:14:11 -- common/autotest_common.sh@931 -- # uname 00:51:51.624 13:14:11 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:51:51.624 13:14:11 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 102582 00:51:51.624 13:14:11 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:51:51.624 killing process with pid 102582 00:51:51.624 13:14:11 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:51:51.624 13:14:11 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 102582' 00:51:51.624 13:14:11 -- common/autotest_common.sh@945 -- # kill 102582 00:51:51.624 13:14:11 -- common/autotest_common.sh@950 -- # wait 102582 00:51:51.882 ************************************ 00:51:51.882 END TEST spdk_target_abort 00:51:51.882 ************************************ 00:51:51.882 00:51:51.882 real 0m10.583s 00:51:51.882 user 0m43.508s 00:51:51.882 sys 0m1.642s 00:51:51.882 13:14:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:51:51.882 13:14:11 -- common/autotest_common.sh@10 -- # set +x 00:51:51.882 13:14:11 -- target/abort_qd_sizes.sh@84 -- # run_test kernel_target_abort kernel_target 00:51:51.882 13:14:11 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:51:51.882 13:14:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:51:51.882 13:14:11 -- common/autotest_common.sh@10 -- # set +x 00:51:51.882 ************************************ 00:51:51.882 START TEST 
kernel_target_abort 00:51:51.882 ************************************ 00:51:52.141 13:14:11 -- common/autotest_common.sh@1104 -- # kernel_target 00:51:52.141 13:14:11 -- target/abort_qd_sizes.sh@66 -- # local name=kernel_target 00:51:52.141 13:14:11 -- target/abort_qd_sizes.sh@68 -- # configure_kernel_target kernel_target 00:51:52.141 13:14:11 -- nvmf/common.sh@621 -- # kernel_name=kernel_target 00:51:52.141 13:14:11 -- nvmf/common.sh@622 -- # nvmet=/sys/kernel/config/nvmet 00:51:52.141 13:14:11 -- nvmf/common.sh@623 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/kernel_target 00:51:52.141 13:14:11 -- nvmf/common.sh@624 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:51:52.141 13:14:11 -- nvmf/common.sh@625 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:51:52.141 13:14:11 -- nvmf/common.sh@627 -- # local block nvme 00:51:52.141 13:14:11 -- nvmf/common.sh@629 -- # [[ ! -e /sys/module/nvmet ]] 00:51:52.141 13:14:11 -- nvmf/common.sh@630 -- # modprobe nvmet 00:51:52.141 13:14:11 -- nvmf/common.sh@633 -- # [[ -e /sys/kernel/config/nvmet ]] 00:51:52.141 13:14:11 -- nvmf/common.sh@635 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:51:52.400 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:51:52.400 Waiting for block devices as requested 00:51:52.400 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:51:52.400 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:51:52.658 13:14:11 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:51:52.658 13:14:11 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme0n1 ]] 00:51:52.658 13:14:11 -- nvmf/common.sh@640 -- # block_in_use nvme0n1 00:51:52.658 13:14:11 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:51:52.658 13:14:11 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:51:52.658 No valid GPT data, bailing 00:51:52.658 13:14:11 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:51:52.658 13:14:11 -- scripts/common.sh@393 -- # pt= 00:51:52.658 13:14:11 -- scripts/common.sh@394 -- # return 1 00:51:52.658 13:14:11 -- nvmf/common.sh@640 -- # nvme=/dev/nvme0n1 00:51:52.658 13:14:11 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:51:52.658 13:14:11 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n1 ]] 00:51:52.658 13:14:11 -- nvmf/common.sh@640 -- # block_in_use nvme1n1 00:51:52.658 13:14:11 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:51:52.658 13:14:11 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:51:52.658 No valid GPT data, bailing 00:51:52.658 13:14:11 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:51:52.658 13:14:11 -- scripts/common.sh@393 -- # pt= 00:51:52.658 13:14:11 -- scripts/common.sh@394 -- # return 1 00:51:52.658 13:14:11 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n1 00:51:52.658 13:14:11 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:51:52.658 13:14:11 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n2 ]] 00:51:52.658 13:14:11 -- nvmf/common.sh@640 -- # block_in_use nvme1n2 00:51:52.658 13:14:11 -- scripts/common.sh@380 -- # local block=nvme1n2 pt 00:51:52.658 13:14:11 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2 00:51:52.658 No valid GPT data, bailing 00:51:52.658 13:14:12 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:51:52.658 13:14:12 -- scripts/common.sh@393 -- # 
pt= 00:51:52.658 13:14:12 -- scripts/common.sh@394 -- # return 1 00:51:52.658 13:14:12 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n2 00:51:52.658 13:14:12 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:51:52.658 13:14:12 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n3 ]] 00:51:52.658 13:14:12 -- nvmf/common.sh@640 -- # block_in_use nvme1n3 00:51:52.658 13:14:12 -- scripts/common.sh@380 -- # local block=nvme1n3 pt 00:51:52.658 13:14:12 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3 00:51:52.916 No valid GPT data, bailing 00:51:52.916 13:14:12 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:51:52.916 13:14:12 -- scripts/common.sh@393 -- # pt= 00:51:52.916 13:14:12 -- scripts/common.sh@394 -- # return 1 00:51:52.916 13:14:12 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n3 00:51:52.916 13:14:12 -- nvmf/common.sh@643 -- # [[ -b /dev/nvme1n3 ]] 00:51:52.916 13:14:12 -- nvmf/common.sh@645 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:51:52.916 13:14:12 -- nvmf/common.sh@646 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:51:52.916 13:14:12 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:51:52.916 13:14:12 -- nvmf/common.sh@652 -- # echo SPDK-kernel_target 00:51:52.916 13:14:12 -- nvmf/common.sh@654 -- # echo 1 00:51:52.916 13:14:12 -- nvmf/common.sh@655 -- # echo /dev/nvme1n3 00:51:52.916 13:14:12 -- nvmf/common.sh@656 -- # echo 1 00:51:52.916 13:14:12 -- nvmf/common.sh@662 -- # echo 10.0.0.1 00:51:52.916 13:14:12 -- nvmf/common.sh@663 -- # echo tcp 00:51:52.916 13:14:12 -- nvmf/common.sh@664 -- # echo 4420 00:51:52.916 13:14:12 -- nvmf/common.sh@665 -- # echo ipv4 00:51:52.916 13:14:12 -- nvmf/common.sh@668 -- # ln -s /sys/kernel/config/nvmet/subsystems/kernel_target /sys/kernel/config/nvmet/ports/1/subsystems/ 00:51:52.916 13:14:12 -- nvmf/common.sh@671 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:864ae4a3-cd96-439b-8ef2-6dab4d992115 --hostid=864ae4a3-cd96-439b-8ef2-6dab4d992115 -a 10.0.0.1 -t tcp -s 4420 00:51:52.916 00:51:52.916 Discovery Log Number of Records 2, Generation counter 2 00:51:52.916 =====Discovery Log Entry 0====== 00:51:52.916 trtype: tcp 00:51:52.916 adrfam: ipv4 00:51:52.916 subtype: current discovery subsystem 00:51:52.916 treq: not specified, sq flow control disable supported 00:51:52.916 portid: 1 00:51:52.916 trsvcid: 4420 00:51:52.916 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:51:52.916 traddr: 10.0.0.1 00:51:52.916 eflags: none 00:51:52.916 sectype: none 00:51:52.916 =====Discovery Log Entry 1====== 00:51:52.916 trtype: tcp 00:51:52.916 adrfam: ipv4 00:51:52.916 subtype: nvme subsystem 00:51:52.916 treq: not specified, sq flow control disable supported 00:51:52.916 portid: 1 00:51:52.916 trsvcid: 4420 00:51:52.917 subnqn: kernel_target 00:51:52.917 traddr: 10.0.0.1 00:51:52.917 eflags: none 00:51:52.917 sectype: none 00:51:52.917 13:14:12 -- target/abort_qd_sizes.sh@69 -- # rabort tcp IPv4 10.0.0.1 4420 kernel_target 00:51:52.917 13:14:12 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:51:52.917 13:14:12 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:51:52.917 13:14:12 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:51:52.917 13:14:12 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:51:52.917 13:14:12 -- target/abort_qd_sizes.sh@21 -- # local subnqn=kernel_target 00:51:52.917 13:14:12 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:51:52.917 13:14:12 -- 
target/abort_qd_sizes.sh@24 -- # local target r 00:51:52.917 13:14:12 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:51:52.917 13:14:12 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:51:52.917 13:14:12 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:51:52.917 13:14:12 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:51:52.917 13:14:12 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:51:52.917 13:14:12 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:51:52.917 13:14:12 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:51:52.917 13:14:12 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:51:52.917 13:14:12 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:51:52.917 13:14:12 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:51:52.917 13:14:12 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:51:52.917 13:14:12 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:51:52.917 13:14:12 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:51:56.204 Initializing NVMe Controllers 00:51:56.204 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:51:56.204 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:51:56.204 Initialization complete. Launching workers. 00:51:56.204 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 30982, failed: 0 00:51:56.204 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 30982, failed to submit 0 00:51:56.204 success 0, unsuccess 30982, failed 0 00:51:56.204 13:14:15 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:51:56.204 13:14:15 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:51:59.494 Initializing NVMe Controllers 00:51:59.494 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:51:59.494 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:51:59.494 Initialization complete. Launching workers. 00:51:59.494 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 68448, failed: 0 00:51:59.494 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 27733, failed to submit 40715 00:51:59.494 success 0, unsuccess 27733, failed 0 00:51:59.494 13:14:18 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:51:59.494 13:14:18 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:52:02.778 Initializing NVMe Controllers 00:52:02.778 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:52:02.778 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:52:02.778 Initialization complete. Launching workers. 
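The kernel_target_abort passes above drive the same abort workload at an in-kernel nvmet target instead of the SPDK one. The xtrace earlier in this log only shows the values being echoed, not the configfs files they are redirected into, so the sketch below uses the standard nvmet configfs attribute names (assumed, not quoted from this script); the backing device /dev/nvme1n3 and the 10.0.0.1:4420 listener match this run:

    cd /sys/kernel/config/nvmet
    mkdir subsystems/kernel_target
    mkdir subsystems/kernel_target/namespaces/1
    mkdir ports/1
    echo 1            > subsystems/kernel_target/attr_allow_any_host   # assumed attribute; lets any host NQN connect
    echo /dev/nvme1n3 > subsystems/kernel_target/namespaces/1/device_path
    echo 1            > subsystems/kernel_target/namespaces/1/enable
    echo 10.0.0.1     > ports/1/addr_traddr
    echo tcp          > ports/1/addr_trtype
    echo 4420         > ports/1/addr_trsvcid
    echo ipv4         > ports/1/addr_adrfam
    ln -s /sys/kernel/config/nvmet/subsystems/kernel_target ports/1/subsystems/
    nvme discover -t tcp -a 10.0.0.1 -s 4420      # should report kernel_target, as in the discovery log above

Teardown mirrors this in reverse, as the clean_kernel_target step further down shows: remove the port's subsystem link, rmdir the namespace, port and subsystem directories, then modprobe -r nvmet_tcp nvmet.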
00:52:02.778 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 73060, failed: 0 00:52:02.778 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 18222, failed to submit 54838 00:52:02.778 success 0, unsuccess 18222, failed 0 00:52:02.778 13:14:21 -- target/abort_qd_sizes.sh@70 -- # clean_kernel_target 00:52:02.778 13:14:21 -- nvmf/common.sh@675 -- # [[ -e /sys/kernel/config/nvmet/subsystems/kernel_target ]] 00:52:02.778 13:14:21 -- nvmf/common.sh@677 -- # echo 0 00:52:02.778 13:14:21 -- nvmf/common.sh@679 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/kernel_target 00:52:02.778 13:14:21 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:52:02.778 13:14:21 -- nvmf/common.sh@681 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:52:02.778 13:14:21 -- nvmf/common.sh@682 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:52:02.778 13:14:21 -- nvmf/common.sh@684 -- # modules=(/sys/module/nvmet/holders/*) 00:52:02.778 13:14:21 -- nvmf/common.sh@686 -- # modprobe -r nvmet_tcp nvmet 00:52:02.778 ************************************ 00:52:02.778 END TEST kernel_target_abort 00:52:02.778 ************************************ 00:52:02.778 00:52:02.778 real 0m10.405s 00:52:02.778 user 0m5.144s 00:52:02.778 sys 0m2.491s 00:52:02.778 13:14:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:52:02.778 13:14:21 -- common/autotest_common.sh@10 -- # set +x 00:52:02.778 13:14:21 -- target/abort_qd_sizes.sh@86 -- # trap - SIGINT SIGTERM EXIT 00:52:02.778 13:14:21 -- target/abort_qd_sizes.sh@87 -- # nvmftestfini 00:52:02.778 13:14:21 -- nvmf/common.sh@476 -- # nvmfcleanup 00:52:02.778 13:14:21 -- nvmf/common.sh@116 -- # sync 00:52:02.778 13:14:21 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:52:02.778 13:14:21 -- nvmf/common.sh@119 -- # set +e 00:52:02.778 13:14:21 -- nvmf/common.sh@120 -- # for i in {1..20} 00:52:02.778 13:14:21 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:52:02.778 rmmod nvme_tcp 00:52:02.778 rmmod nvme_fabrics 00:52:02.778 rmmod nvme_keyring 00:52:02.778 13:14:21 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:52:02.778 13:14:21 -- nvmf/common.sh@123 -- # set -e 00:52:02.778 13:14:21 -- nvmf/common.sh@124 -- # return 0 00:52:02.778 13:14:21 -- nvmf/common.sh@477 -- # '[' -n 102582 ']' 00:52:02.778 13:14:21 -- nvmf/common.sh@478 -- # killprocess 102582 00:52:02.778 13:14:21 -- common/autotest_common.sh@926 -- # '[' -z 102582 ']' 00:52:02.778 13:14:21 -- common/autotest_common.sh@930 -- # kill -0 102582 00:52:02.779 Process with pid 102582 is not found 00:52:02.779 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (102582) - No such process 00:52:02.779 13:14:21 -- common/autotest_common.sh@953 -- # echo 'Process with pid 102582 is not found' 00:52:02.779 13:14:21 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:52:02.779 13:14:21 -- nvmf/common.sh@481 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:52:03.345 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:52:03.345 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:52:03.345 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:52:03.346 13:14:22 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:52:03.346 13:14:22 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:52:03.346 13:14:22 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:52:03.346 13:14:22 -- nvmf/common.sh@277 -- # 
remove_spdk_ns 00:52:03.346 13:14:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:52:03.346 13:14:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:52:03.346 13:14:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:52:03.346 13:14:22 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:52:03.346 00:52:03.346 real 0m24.368s 00:52:03.346 user 0m50.024s 00:52:03.346 sys 0m5.370s 00:52:03.346 13:14:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:52:03.346 13:14:22 -- common/autotest_common.sh@10 -- # set +x 00:52:03.346 ************************************ 00:52:03.346 END TEST nvmf_abort_qd_sizes 00:52:03.346 ************************************ 00:52:03.346 13:14:22 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:52:03.346 13:14:22 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:52:03.346 13:14:22 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:52:03.346 13:14:22 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:52:03.346 13:14:22 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:52:03.346 13:14:22 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:52:03.346 13:14:22 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:52:03.346 13:14:22 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:52:03.346 13:14:22 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:52:03.346 13:14:22 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:52:03.346 13:14:22 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:52:03.346 13:14:22 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:52:03.346 13:14:22 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:52:03.346 13:14:22 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:52:03.346 13:14:22 -- spdk/autotest.sh@378 -- # [[ 0 -eq 1 ]] 00:52:03.346 13:14:22 -- spdk/autotest.sh@383 -- # trap - SIGINT SIGTERM EXIT 00:52:03.346 13:14:22 -- spdk/autotest.sh@385 -- # timing_enter post_cleanup 00:52:03.346 13:14:22 -- common/autotest_common.sh@712 -- # xtrace_disable 00:52:03.346 13:14:22 -- common/autotest_common.sh@10 -- # set +x 00:52:03.346 13:14:22 -- spdk/autotest.sh@386 -- # autotest_cleanup 00:52:03.346 13:14:22 -- common/autotest_common.sh@1371 -- # local autotest_es=0 00:52:03.346 13:14:22 -- common/autotest_common.sh@1372 -- # xtrace_disable 00:52:03.346 13:14:22 -- common/autotest_common.sh@10 -- # set +x 00:52:05.248 INFO: APP EXITING 00:52:05.248 INFO: killing all VMs 00:52:05.248 INFO: killing vhost app 00:52:05.248 INFO: EXIT DONE 00:52:05.507 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:52:05.766 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:52:05.766 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:52:06.337 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:52:06.337 Cleaning 00:52:06.337 Removing: /var/run/dpdk/spdk0/config 00:52:06.337 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:52:06.596 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:52:06.596 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:52:06.596 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:52:06.596 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:52:06.596 Removing: /var/run/dpdk/spdk0/hugepage_info 00:52:06.596 Removing: /var/run/dpdk/spdk1/config 00:52:06.596 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:52:06.596 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:52:06.596 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 
00:52:06.596 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:52:06.596 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:52:06.596 Removing: /var/run/dpdk/spdk1/hugepage_info 00:52:06.596 Removing: /var/run/dpdk/spdk2/config 00:52:06.596 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:52:06.596 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:52:06.596 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:52:06.596 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:52:06.596 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:52:06.596 Removing: /var/run/dpdk/spdk2/hugepage_info 00:52:06.596 Removing: /var/run/dpdk/spdk3/config 00:52:06.596 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:52:06.596 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:52:06.596 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:52:06.596 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:52:06.596 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:52:06.596 Removing: /var/run/dpdk/spdk3/hugepage_info 00:52:06.596 Removing: /var/run/dpdk/spdk4/config 00:52:06.596 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:52:06.596 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:52:06.596 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:52:06.596 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:52:06.596 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:52:06.596 Removing: /var/run/dpdk/spdk4/hugepage_info 00:52:06.596 Removing: /dev/shm/nvmf_trace.0 00:52:06.596 Removing: /dev/shm/spdk_tgt_trace.pid67334 00:52:06.596 Removing: /var/run/dpdk/spdk0 00:52:06.596 Removing: /var/run/dpdk/spdk1 00:52:06.596 Removing: /var/run/dpdk/spdk2 00:52:06.596 Removing: /var/run/dpdk/spdk3 00:52:06.596 Removing: /var/run/dpdk/spdk4 00:52:06.596 Removing: /var/run/dpdk/spdk_pid100094 00:52:06.596 Removing: /var/run/dpdk/spdk_pid100391 00:52:06.596 Removing: /var/run/dpdk/spdk_pid100939 00:52:06.596 Removing: /var/run/dpdk/spdk_pid100944 00:52:06.596 Removing: /var/run/dpdk/spdk_pid101303 00:52:06.596 Removing: /var/run/dpdk/spdk_pid101462 00:52:06.596 Removing: /var/run/dpdk/spdk_pid101619 00:52:06.596 Removing: /var/run/dpdk/spdk_pid101716 00:52:06.596 Removing: /var/run/dpdk/spdk_pid101869 00:52:06.596 Removing: /var/run/dpdk/spdk_pid101984 00:52:06.596 Removing: /var/run/dpdk/spdk_pid102657 00:52:06.596 Removing: /var/run/dpdk/spdk_pid102692 00:52:06.596 Removing: /var/run/dpdk/spdk_pid102722 00:52:06.596 Removing: /var/run/dpdk/spdk_pid102970 00:52:06.596 Removing: /var/run/dpdk/spdk_pid103005 00:52:06.596 Removing: /var/run/dpdk/spdk_pid103035 00:52:06.596 Removing: /var/run/dpdk/spdk_pid67190 00:52:06.596 Removing: /var/run/dpdk/spdk_pid67334 00:52:06.596 Removing: /var/run/dpdk/spdk_pid67634 00:52:06.596 Removing: /var/run/dpdk/spdk_pid67914 00:52:06.596 Removing: /var/run/dpdk/spdk_pid68089 00:52:06.596 Removing: /var/run/dpdk/spdk_pid68169 00:52:06.596 Removing: /var/run/dpdk/spdk_pid68250 00:52:06.596 Removing: /var/run/dpdk/spdk_pid68344 00:52:06.596 Removing: /var/run/dpdk/spdk_pid68383 00:52:06.596 Removing: /var/run/dpdk/spdk_pid68418 00:52:06.596 Removing: /var/run/dpdk/spdk_pid68473 00:52:06.596 Removing: /var/run/dpdk/spdk_pid68575 00:52:06.596 Removing: /var/run/dpdk/spdk_pid69205 00:52:06.596 Removing: /var/run/dpdk/spdk_pid69268 00:52:06.596 Removing: /var/run/dpdk/spdk_pid69333 00:52:06.596 Removing: /var/run/dpdk/spdk_pid69361 00:52:06.596 Removing: /var/run/dpdk/spdk_pid69445 00:52:06.596 Removing: 
/var/run/dpdk/spdk_pid69473 00:52:06.596 Removing: /var/run/dpdk/spdk_pid69558 00:52:06.596 Removing: /var/run/dpdk/spdk_pid69585 00:52:06.596 Removing: /var/run/dpdk/spdk_pid69632 00:52:06.596 Removing: /var/run/dpdk/spdk_pid69662 00:52:06.596 Removing: /var/run/dpdk/spdk_pid69708 00:52:06.596 Removing: /var/run/dpdk/spdk_pid69738 00:52:06.596 Removing: /var/run/dpdk/spdk_pid69889 00:52:06.596 Removing: /var/run/dpdk/spdk_pid69921 00:52:06.596 Removing: /var/run/dpdk/spdk_pid70000 00:52:06.596 Removing: /var/run/dpdk/spdk_pid70064 00:52:06.596 Removing: /var/run/dpdk/spdk_pid70094 00:52:06.855 Removing: /var/run/dpdk/spdk_pid70153 00:52:06.855 Removing: /var/run/dpdk/spdk_pid70172 00:52:06.855 Removing: /var/run/dpdk/spdk_pid70207 00:52:06.855 Removing: /var/run/dpdk/spdk_pid70225 00:52:06.855 Removing: /var/run/dpdk/spdk_pid70255 00:52:06.855 Removing: /var/run/dpdk/spdk_pid70275 00:52:06.855 Removing: /var/run/dpdk/spdk_pid70309 00:52:06.855 Removing: /var/run/dpdk/spdk_pid70329 00:52:06.855 Removing: /var/run/dpdk/spdk_pid70363 00:52:06.855 Removing: /var/run/dpdk/spdk_pid70385 00:52:06.855 Removing: /var/run/dpdk/spdk_pid70418 00:52:06.855 Removing: /var/run/dpdk/spdk_pid70439 00:52:06.855 Removing: /var/run/dpdk/spdk_pid70468 00:52:06.855 Removing: /var/run/dpdk/spdk_pid70494 00:52:06.855 Removing: /var/run/dpdk/spdk_pid70523 00:52:06.855 Removing: /var/run/dpdk/spdk_pid70548 00:52:06.855 Removing: /var/run/dpdk/spdk_pid70577 00:52:06.855 Removing: /var/run/dpdk/spdk_pid70597 00:52:06.855 Removing: /var/run/dpdk/spdk_pid70631 00:52:06.855 Removing: /var/run/dpdk/spdk_pid70645 00:52:06.855 Removing: /var/run/dpdk/spdk_pid70685 00:52:06.855 Removing: /var/run/dpdk/spdk_pid70699 00:52:06.855 Removing: /var/run/dpdk/spdk_pid70739 00:52:06.855 Removing: /var/run/dpdk/spdk_pid70753 00:52:06.855 Removing: /var/run/dpdk/spdk_pid70788 00:52:06.855 Removing: /var/run/dpdk/spdk_pid70807 00:52:06.855 Removing: /var/run/dpdk/spdk_pid70842 00:52:06.855 Removing: /var/run/dpdk/spdk_pid70861 00:52:06.855 Removing: /var/run/dpdk/spdk_pid70896 00:52:06.855 Removing: /var/run/dpdk/spdk_pid70915 00:52:06.855 Removing: /var/run/dpdk/spdk_pid70950 00:52:06.855 Removing: /var/run/dpdk/spdk_pid70964 00:52:06.855 Removing: /var/run/dpdk/spdk_pid71004 00:52:06.855 Removing: /var/run/dpdk/spdk_pid71021 00:52:06.855 Removing: /var/run/dpdk/spdk_pid71064 00:52:06.855 Removing: /var/run/dpdk/spdk_pid71081 00:52:06.855 Removing: /var/run/dpdk/spdk_pid71124 00:52:06.855 Removing: /var/run/dpdk/spdk_pid71138 00:52:06.855 Removing: /var/run/dpdk/spdk_pid71180 00:52:06.855 Removing: /var/run/dpdk/spdk_pid71194 00:52:06.855 Removing: /var/run/dpdk/spdk_pid71234 00:52:06.855 Removing: /var/run/dpdk/spdk_pid71293 00:52:06.855 Removing: /var/run/dpdk/spdk_pid71403 00:52:06.855 Removing: /var/run/dpdk/spdk_pid71810 00:52:06.855 Removing: /var/run/dpdk/spdk_pid78518 00:52:06.855 Removing: /var/run/dpdk/spdk_pid78858 00:52:06.855 Removing: /var/run/dpdk/spdk_pid81269 00:52:06.855 Removing: /var/run/dpdk/spdk_pid81650 00:52:06.855 Removing: /var/run/dpdk/spdk_pid81904 00:52:06.855 Removing: /var/run/dpdk/spdk_pid81951 00:52:06.855 Removing: /var/run/dpdk/spdk_pid82257 00:52:06.855 Removing: /var/run/dpdk/spdk_pid82313 00:52:06.855 Removing: /var/run/dpdk/spdk_pid82684 00:52:06.855 Removing: /var/run/dpdk/spdk_pid83207 00:52:06.855 Removing: /var/run/dpdk/spdk_pid83644 00:52:06.855 Removing: /var/run/dpdk/spdk_pid84600 00:52:06.855 Removing: /var/run/dpdk/spdk_pid85583 00:52:06.855 Removing: /var/run/dpdk/spdk_pid85694 
00:52:06.855 Removing: /var/run/dpdk/spdk_pid85762 00:52:06.855 Removing: /var/run/dpdk/spdk_pid87216 00:52:06.856 Removing: /var/run/dpdk/spdk_pid87444 00:52:06.856 Removing: /var/run/dpdk/spdk_pid87889 00:52:06.856 Removing: /var/run/dpdk/spdk_pid88001 00:52:06.856 Removing: /var/run/dpdk/spdk_pid88153 00:52:06.856 Removing: /var/run/dpdk/spdk_pid88198 00:52:06.856 Removing: /var/run/dpdk/spdk_pid88246 00:52:06.856 Removing: /var/run/dpdk/spdk_pid88286 00:52:06.856 Removing: /var/run/dpdk/spdk_pid88450 00:52:06.856 Removing: /var/run/dpdk/spdk_pid88603 00:52:06.856 Removing: /var/run/dpdk/spdk_pid88867 00:52:06.856 Removing: /var/run/dpdk/spdk_pid88984 00:52:06.856 Removing: /var/run/dpdk/spdk_pid89393 00:52:06.856 Removing: /var/run/dpdk/spdk_pid89769 00:52:06.856 Removing: /var/run/dpdk/spdk_pid89771 00:52:06.856 Removing: /var/run/dpdk/spdk_pid92006 00:52:06.856 Removing: /var/run/dpdk/spdk_pid92309 00:52:06.856 Removing: /var/run/dpdk/spdk_pid92797 00:52:06.856 Removing: /var/run/dpdk/spdk_pid92805 00:52:06.856 Removing: /var/run/dpdk/spdk_pid93143 00:52:06.856 Removing: /var/run/dpdk/spdk_pid93163 00:52:06.856 Removing: /var/run/dpdk/spdk_pid93181 00:52:06.856 Removing: /var/run/dpdk/spdk_pid93213 00:52:06.856 Removing: /var/run/dpdk/spdk_pid93219 00:52:06.856 Removing: /var/run/dpdk/spdk_pid93357 00:52:06.856 Removing: /var/run/dpdk/spdk_pid93359 00:52:06.856 Removing: /var/run/dpdk/spdk_pid93467 00:52:06.856 Removing: /var/run/dpdk/spdk_pid93469 00:52:06.856 Removing: /var/run/dpdk/spdk_pid93583 00:52:06.856 Removing: /var/run/dpdk/spdk_pid93585 00:52:07.114 Removing: /var/run/dpdk/spdk_pid94061 00:52:07.114 Removing: /var/run/dpdk/spdk_pid94104 00:52:07.114 Removing: /var/run/dpdk/spdk_pid94261 00:52:07.114 Removing: /var/run/dpdk/spdk_pid94376 00:52:07.114 Removing: /var/run/dpdk/spdk_pid94765 00:52:07.114 Removing: /var/run/dpdk/spdk_pid95016 00:52:07.114 Removing: /var/run/dpdk/spdk_pid95496 00:52:07.114 Removing: /var/run/dpdk/spdk_pid96066 00:52:07.114 Removing: /var/run/dpdk/spdk_pid96530 00:52:07.114 Removing: /var/run/dpdk/spdk_pid96619 00:52:07.114 Removing: /var/run/dpdk/spdk_pid96705 00:52:07.114 Removing: /var/run/dpdk/spdk_pid96799 00:52:07.114 Removing: /var/run/dpdk/spdk_pid96953 00:52:07.114 Removing: /var/run/dpdk/spdk_pid97043 00:52:07.114 Removing: /var/run/dpdk/spdk_pid97128 00:52:07.114 Removing: /var/run/dpdk/spdk_pid97217 00:52:07.114 Removing: /var/run/dpdk/spdk_pid97557 00:52:07.114 Removing: /var/run/dpdk/spdk_pid98258 00:52:07.114 Removing: /var/run/dpdk/spdk_pid99604 00:52:07.114 Removing: /var/run/dpdk/spdk_pid99803 00:52:07.114 Clean 00:52:07.114 killing process with pid 61529 00:52:07.114 killing process with pid 61530 00:52:07.114 13:14:26 -- common/autotest_common.sh@1436 -- # return 0 00:52:07.114 13:14:26 -- spdk/autotest.sh@387 -- # timing_exit post_cleanup 00:52:07.114 13:14:26 -- common/autotest_common.sh@718 -- # xtrace_disable 00:52:07.114 13:14:26 -- common/autotest_common.sh@10 -- # set +x 00:52:07.114 13:14:26 -- spdk/autotest.sh@389 -- # timing_exit autotest 00:52:07.114 13:14:26 -- common/autotest_common.sh@718 -- # xtrace_disable 00:52:07.114 13:14:26 -- common/autotest_common.sh@10 -- # set +x 00:52:07.114 13:14:26 -- spdk/autotest.sh@390 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:52:07.114 13:14:26 -- spdk/autotest.sh@392 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:52:07.114 13:14:26 -- spdk/autotest.sh@392 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:52:07.114 13:14:26 
-- spdk/autotest.sh@394 -- # hash lcov 00:52:07.114 13:14:26 -- spdk/autotest.sh@394 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:52:07.114 13:14:26 -- spdk/autotest.sh@396 -- # hostname 00:52:07.114 13:14:26 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1716830599-074-updated-1705279005 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:52:07.373 geninfo: WARNING: invalid characters removed from testname! 00:52:29.300 13:14:46 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:52:30.677 13:14:49 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:52:33.209 13:14:52 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:52:35.763 13:14:54 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:52:37.663 13:14:56 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:52:40.192 13:14:59 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:52:42.733 13:15:01 -- spdk/autotest.sh@403 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:52:42.733 13:15:01 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:52:42.733 13:15:01 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:52:42.733 13:15:01 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:52:42.733 13:15:01 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:52:42.733 13:15:01 -- paths/export.sh@2 -- $ 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:52:42.733 13:15:01 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:52:42.733 13:15:01 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:52:42.733 13:15:01 -- paths/export.sh@5 -- $ export PATH 00:52:42.733 13:15:01 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:52:42.733 13:15:01 -- common/autobuild_common.sh@434 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:52:42.733 13:15:01 -- common/autobuild_common.sh@435 -- $ date +%s 00:52:42.733 13:15:01 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1721654101.XXXXXX 00:52:42.733 13:15:01 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1721654101.9gdfko 00:52:42.733 13:15:01 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:52:42.733 13:15:01 -- common/autobuild_common.sh@441 -- $ '[' -n v22.11.4 ']' 00:52:42.733 13:15:01 -- common/autobuild_common.sh@442 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:52:42.733 13:15:01 -- common/autobuild_common.sh@442 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:52:42.733 13:15:01 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:52:42.733 13:15:01 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:52:42.733 13:15:01 -- common/autobuild_common.sh@451 -- $ get_config_params 00:52:42.733 13:15:01 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:52:42.733 13:15:01 -- common/autotest_common.sh@10 -- $ set +x 00:52:42.733 13:15:01 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-avahi --with-golang' 00:52:42.733 13:15:01 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:52:42.733 13:15:01 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 
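The coverage stage just above assembles the final report from lcov captures. With the repeated branch/function rc switches omitted and the literal output paths shortened to bare filenames (both simplifications, not what the log ran verbatim), the sequence amounts to:

    # capture counters produced by this test run; the -t tag is the build host's hostname
    lcov -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t "$(hostname)" -o cov_test.info
    # merge with the cov_base.info baseline produced earlier in the job
    lcov -q -a cov_base.info -a cov_test.info -o cov_total.info
    # strip sources that are not SPDK's own code from the merged report
    lcov -q -r cov_total.info '*/dpdk/*' -o cov_total.info
    lcov -q -r cov_total.info '/usr/*' -o cov_total.info

The same -r filter is then repeated for the bundled example and app directories (examples/vmd, app/spdk_lspci, app/spdk_top), as the calls above show.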
00:52:42.733 13:15:01 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:52:42.733 13:15:01 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:52:42.733 13:15:01 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:52:42.733 13:15:01 -- spdk/autopackage.sh@19 -- $ timing_finish 00:52:42.733 13:15:01 -- common/autotest_common.sh@724 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:52:42.733 13:15:01 -- common/autotest_common.sh@725 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:52:42.733 13:15:01 -- common/autotest_common.sh@727 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:52:42.733 13:15:01 -- spdk/autopackage.sh@20 -- $ exit 0 00:52:42.733 + [[ -n 5982 ]] 00:52:42.733 + sudo kill 5982 00:52:42.744 [Pipeline] } 00:52:42.764 [Pipeline] // timeout 00:52:42.769 [Pipeline] } 00:52:42.784 [Pipeline] // stage 00:52:42.790 [Pipeline] } 00:52:42.808 [Pipeline] // catchError 00:52:42.817 [Pipeline] stage 00:52:42.819 [Pipeline] { (Stop VM) 00:52:42.832 [Pipeline] sh 00:52:43.109 + vagrant halt 00:52:45.639 ==> default: Halting domain... 00:52:52.247 [Pipeline] sh 00:52:52.526 + vagrant destroy -f 00:52:55.808 ==> default: Removing domain... 00:52:55.820 [Pipeline] sh 00:52:56.099 + mv output /var/jenkins/workspace/nvmf-tcp-vg-autotest/output 00:52:56.107 [Pipeline] } 00:52:56.125 [Pipeline] // stage 00:52:56.130 [Pipeline] } 00:52:56.147 [Pipeline] // dir 00:52:56.152 [Pipeline] } 00:52:56.169 [Pipeline] // wrap 00:52:56.175 [Pipeline] } 00:52:56.190 [Pipeline] // catchError 00:52:56.199 [Pipeline] stage 00:52:56.202 [Pipeline] { (Epilogue) 00:52:56.215 [Pipeline] sh 00:52:56.494 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:53:03.066 [Pipeline] catchError 00:53:03.068 [Pipeline] { 00:53:03.081 [Pipeline] sh 00:53:03.361 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:53:03.361 Artifacts sizes are good 00:53:03.370 [Pipeline] } 00:53:03.386 [Pipeline] // catchError 00:53:03.396 [Pipeline] archiveArtifacts 00:53:03.403 Archiving artifacts 00:53:03.586 [Pipeline] cleanWs 00:53:03.597 [WS-CLEANUP] Deleting project workspace... 00:53:03.598 [WS-CLEANUP] Deferred wipeout is used... 00:53:03.605 [WS-CLEANUP] done 00:53:03.607 [Pipeline] } 00:53:03.625 [Pipeline] // stage 00:53:03.630 [Pipeline] } 00:53:03.646 [Pipeline] // node 00:53:03.651 [Pipeline] End of Pipeline 00:53:03.688 Finished: SUCCESS